Video compression is always a touchy subject. The benefits are clear, but much of the bias against compression dates back to the historic analogue NTSC and PAL systems. In the digital IP world, is it now time to take a pragmatic look at video compression?
It’s years since I’ve seen the luma cross talk on a presenter’s herringbone jacket, or the chroma cross talk from transitions on highly saturated colors manifesting as small luminance dots, and there’s good reason for this: these artifacts were common in analogue broadcasting but have all but disappeared into the history books since the adoption of digital transmission. We’ve moved on.
Television is an illusion: there are no moving pictures in television, just a series of still images played very quickly to give the perception of motion. That said, even with baseband video, compression starts at the image sensor with frame rate sampling. In effect, we’ve taken an infinitely varying light scene, filtered it through a lens and iris, sampled it at the image sensor, and then filtered it again to give us our RGB.
My point is, even baseband video in its RGB form is compressed by the action of acquisition and sampling. Motion artifacts are prevalent even in the baseband RGB digital format and we know that increasing the frame rate improves motion representation. In other words, we have always had video compression in our baseband “uncompressed” system.
With that in mind, is it fair to say we should be thinking in terms of how much compression is acceptable for the system we’re working with, as opposed to the binary compression-versus-no-compression argument?
Many recording servers use compression to keep video data rates within achievable SSD read and write speeds. When working in the SDI domain, a server with an SDI input and output may well be compressing the signal when recording to disk, and then decompressing it during playback.
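To put some rough numbers on this, a back-of-envelope calculation shows why uncompressed recording strains storage. The format below (1080p50, 10-bit, 4:2:2, giving two samples per pixel on average) is chosen purely as an illustration:

```python
def video_data_rate_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Rough uncompressed video data rate in gigabits per second."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# 1080p50, 10-bit 4:2:2: each pixel carries a Y sample plus, on average,
# one chroma sample (Cb or Cr), hence two samples per pixel.
rate = video_data_rate_gbps(1920, 1080, 50, 10, 2)
print(f"{rate:.2f} Gb/s")  # prints 2.07 Gb/s
```

Sustaining around 2 Gb/s per channel, for every record and play channel simultaneously, quickly exceeds what a modest disk subsystem can deliver, which is exactly why servers quietly compress.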
Even with these examples it’s clear that we have unseen (or unrecognized) compression going on within the broadcast chain, which leads me back to the suggestion that we shouldn’t be thinking in terms of compression versus no compression, but instead asking how much compression is enough.
IP provides a fantastic opportunity to rethink many of our workflows. Because IP is data agnostic, we can carry any type of video we like: compressed, uncompressed, RGB, YUV, YIQ, something we haven’t thought of yet; literally anything. The only challenge is making sure that the encoding and decoding equipment understands the video format. I accept there are some caveats around timing, and we have to think very carefully about latency and buffer sizes, but the option of sending compressed video opens up many doors for broadcasters.
Furthermore, if we had a common compression format that could be used by video servers and all the other infrastructure equipment, the need for repeated compression and decompression would be negated. Wouldn’t this lead to a system that is far superior in quality, as the concatenation effects would be greatly reduced?
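The concatenation effect can be sketched numerically. The toy model below is an assumption of mine, not any real codec: simple uniform quantization stands in for lossy coding, and a small gain change between generations stands in for the routine processing that sits between codecs in a real chain. The point it illustrates is that ten compress/decompress generations degrade the signal noticeably more than one:

```python
import random

def quantize(samples, step):
    """Lossy quantization: snap every sample to the nearest multiple of step."""
    return [round(s / step) * step for s in samples]

def mean_abs_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

random.seed(42)
original = [random.uniform(0.0, 1.0) for _ in range(10_000)]
STEP = 1 / 256  # 8-bit-style quantization step

# One generation: quantize once.
single = quantize(original, STEP)

# Ten concatenated generations: apply a slightly different gain each pass
# (a stand-in for processing between codecs), quantize, then remove the gain.
multi = original
for generation in range(10):
    gain = 1.0 + 0.003 * (generation + 1)
    multi = [s / gain for s in quantize([s * gain for s in multi], STEP)]

print(f"one generation:  {mean_abs_error(single, original):.5f}")
print(f"ten generations: {mean_abs_error(multi, original):.5f}")
```

Running this shows the ten-generation error is several times the single-generation error: each pass adds its own quantization noise, which is exactly the concatenation loss a common mezzanine format would avoid.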
Video compression technology has come on in leaps and bounds in recent years, with visually lossless codecs championing the cause. Is it now time to accept that compression is core to any broadcast facility, and that we really only need to answer the question “how much is enough?”