Broadcast standards have stood the test of time and served us well. But are we now in a position where transport stream standards are running the risk of inhibiting innovation? Is there a better way?
From the very early days of television, standardization was an absolute necessity. The electronics of the 1930s were primitive, and we needed standards to guarantee that home television sets and studio monitors would stay synchronized to the electron scanning beam inside the tube cameras.
We haven’t used tubes for at least thirty years, but we are still saddled with standardization based on their specific requirements. In my opinion, interlace and fractional frame rates are the two worst legacy systems we still carry from old-school television. And anybody reading John Watkinson’s series “Is Gamma Still Needed” may think gamma is a close second. The challenge is that we are still thinking in terms of 1930s technology, and in doing so we are hindering the development of broadcast television.
As more broadcasters adopt IP, we have a real opportunity to develop our thinking and working practices. IP isn’t just about moving from SDI and AES; it has the potential to allow us to transcend all our previous assumptions and biases. To understand this better, video engineers have much to learn from our colleagues specializing in audio, who’ve been using IP for over twenty years.
The Session Description Protocol (SDP) used in audio over IP describes, amongst other parameters, a device’s IP address, the codec used, the sampling rate, and the bit rate. Each device can have its own parameters and is not constrained by the transport stream, as it is with AES and SDI. This is ideally suited to IP because many different audio formats can be distributed over the same IP network, and receiving devices, such as sound consoles, know how to decode the audio for processing.
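To make this concrete, here is a minimal sketch of the idea in Python. The sample SDP is a hypothetical AES67-style audio stream (L24 is 24-bit linear PCM); the field layout follows the SDP specification (RFC 8866), but the addresses and values are invented for illustration.

```python
# Illustrative parser for the handful of SDP fields mentioned above.
# The sample below is a hypothetical AES67-style audio stream.

SAMPLE_SDP = """\
v=0
o=- 1 1 IN IP4 192.168.1.10
s=Mic 1
c=IN IP4 239.1.1.1/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
"""

def parse_sdp(text):
    """Pull out the address, media type, and rtpmap parameters."""
    info = {}
    for line in text.splitlines():
        key, _, value = line.partition("=")
        if key == "c":
            # Connection data: "IN IP4 <addr>/<ttl>" - keep just the address.
            info["address"] = value.split()[2].split("/")[0]
        elif key == "m":
            # Media line: "<type> <port> <proto> <payload>"
            media, port, _rest = value.split(maxsplit=2)
            info["media"], info["port"] = media, int(port)
        elif key == "a" and value.startswith("rtpmap:"):
            # "rtpmap:<payload> <codec>/<rate>[/<channels>]"
            codec, rate, channels = value.split(" ", 1)[1].split("/")
            info["codec"] = codec
            info["sample_rate"] = int(rate)
            info["channels"] = int(channels)
    return info

stream = parse_sdp(SAMPLE_SDP)
print(stream)
```

A receiving device reading this description knows, without any out-of-band agreement, that it is being offered two channels of 24-bit PCM at 48 kHz on port 5004.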
If we apply the same principles to video, then we could use SDP-type files to describe the format of each stream. For example, a camera’s SDP would define the frame rate, the number of horizontal and vertical pixels, and the color space. The point is that the video essence is abstracted away from the underlying transport stream, so the system becomes much more flexible and dynamic.
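A video descriptor of this kind already exists in embryo: SMPTE ST 2110 carries video format information in SDP fmtp attributes. The sketch below uses attribute names loosely modeled on those parameters (sampling, width, height, exactframerate, depth, colorimetry); treat the exact values as illustrative rather than a conformant ST 2110 description.

```python
# A hypothetical camera SDP fmtp line, loosely modeled on the
# parameters SMPTE ST 2110-20 uses for uncompressed video over IP.

CAMERA_FMTP = (
    "sampling=YCbCr-4:2:2; width=1920; height=1080; "
    "exactframerate=50; depth=10; colorimetry=BT709"
)

def parse_fmtp(params):
    """Turn 'key=value; key=value' fmtp parameters into a dict."""
    fmt = {}
    for pair in params.split(";"):
        key, _, value = pair.strip().partition("=")
        fmt[key] = value
    return fmt

video = parse_fmtp(CAMERA_FMTP)
print(video["width"], video["height"], video["exactframerate"])
```

Any downstream device can read this descriptor and configure itself for the stream, with no fixed house format imposed by the transport.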
Imagine a production switcher that relies not on a fixed and static video format, but instead on individual SDPs to define each of its video inputs. We could mix broadcast cameras with computer-generated images and a multitude of other compressed and uncompressed video sources without the need for external synchronization or processing. This would take the production switcher to a whole new level of productivity.
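How might such a switcher behave? The sketch below is purely hypothetical, with every class and field name invented for illustration: each input carries its own format descriptor, and the switcher simply notes which inputs match its program output and which need conversion, rather than rejecting anything that isn’t the house standard.

```python
# Hypothetical switcher driven by per-input format descriptors.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class VideoDescriptor:
    source: str
    width: int
    height: int
    frame_rate: float
    colorimetry: str

class Switcher:
    def __init__(self, output: VideoDescriptor):
        self.output = output
        self.inputs = {}

    def add_input(self, desc: VideoDescriptor) -> bool:
        """Accept any format; flag inputs that need conversion."""
        needs_conversion = (
            (desc.width, desc.height, desc.frame_rate)
            != (self.output.width, self.output.height, self.output.frame_rate)
        )
        self.inputs[desc.source] = (desc, needs_conversion)
        return needs_conversion

house = VideoDescriptor("program out", 1920, 1080, 50.0, "BT709")
mixer = Switcher(house)
mixer.add_input(VideoDescriptor("camera 1", 1920, 1080, 50.0, "BT709"))
mixer.add_input(VideoDescriptor("graphics", 3840, 2160, 60.0, "BT709"))
```

Here the camera matches the program format and passes straight through, while the UHD graphics source is flagged for conversion; neither is refused, and no external synchronization was assumed.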
Computing resources are available now to make this happen, especially when we consider the amount of processing available in the cloud.
Using the transport stream to define the video and audio essence amounts to the tail wagging the dog. Vendors have a vested interest in providing interoperability, and using well-described, agile descriptors that are easily accessible to define video formats creates a community of openness and flexibility. After all, “agile” is more than just a buzzword; it’s a way of life.
We don’t need committee meeting after committee meeting to dictate how vendors should be working. Instead, let them solve the problems the industry is demanding solutions to and demonstrate their agile methodologies, not be constrained by thoughts of line syncs and field syncs.
Our viewers are demanding progress at an incredible pace, and we need to keep up with them, not hold another meeting. IP is data agnostic; why not use it that way?