The Last Mile Is The Hardest—Getting To Live IP
The DYVI production switcher introduces IP switching and connectivity to live production.
It is no secret that the media industry is moving towards an IP infrastructure. What is less clear is how much confusion the transition will cause. Media expert Gary Olson offers some guidance on this important evolution.
This applies to almost every aspect of technology. In telecom, getting services into the business or residential premises is the toughest stretch. In both software and hardware development, finishing the code or assembling all the components into a completed product is the hardest part of the project.
Now let’s apply this to the transition to IP. In my NAB musings I questioned how we can accept file-based workflows, metadata, file acceleration, cloud services and the introduction of SDN (Software Defined Networking), while in the same breath industry leaders keep saying that IP is still a few years out. It’s all IP, and has been for quite some time - except for live production. That task is the last mile.
Confusion is still front of mind
A few initiatives are starting to appear and, as with most of the transition to IP, they are adding a touch of confusion. Handling time and sync seem to be the current speed bumps to getting IP direct from a camera to a server and then recorded. A similar issue involves taking an IP stream from the camera directly into a production switcher and getting seamless switching between sources.
Then we have the challenge of intercutting between SDI and IP. We managed to figure it out going from analog to digital and from SD to HD. Can we do it here?
SMPTE has the ST 2022 family of standards for IP media creation and transport, based on MPEG-TS. In addition, there are SMPTE ST 2059-1 and ST 2059-2: ST 2059-2 profiles the Precision Time Protocol (PTP) as a replacement for timecode, and ST 2059-1 aligns signals to that reference as a replacement for genlock. OOPS, not replace - I mean a next generation time reference and a next generation timing (sync) reference for “frame” accurate production switching. Here again, the intent is to have IP standards that can integrate live streams with SDI.
These will be layers (in the sense of the OSI seven-layer model) in the IP stream and part of the encapsulated package of audio, video, metadata, control and communications.
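To make the timing idea concrete, here is a minimal sketch of what ST 2059-style alignment means in practice: every frame boundary falls an exact number of frame periods after the PTP epoch, so any device that knows PTP time can derive its own “genlock” locally. The frame rate and timestamp below are illustrative values, and a real implementation must follow the SMPTE epoch and profile details.

```python
# Conceptual sketch: derive the next frame alignment point from PTP time.
from fractions import Fraction

def next_alignment_point(ptp_time_ns: int, frame_rate: Fraction) -> int:
    """Return the PTP time (in ns) of the next frame alignment point."""
    frame_period_ns = Fraction(1_000_000_000) / frame_rate   # exact rational math avoids drift
    frames_since_epoch = Fraction(ptp_time_ns) / frame_period_ns
    next_frame = int(frames_since_epoch) + 1                 # first full frame boundary ahead
    return int(next_frame * frame_period_ns)

rate = Fraction(30000, 1001)            # 29.97 fps
now_ns = 1_700_000_000_123_456_789      # an example PTP timestamp in nanoseconds
print(next_alignment_point(now_ns, rate))
```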
One of the new processes in IP workflow is orchestration. This is next generation automation, controlling the movement of files and streams throughout the core infrastructure, directing content to the correct device or system with a command structure for the system to perform its functions.
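As a rough illustration of that command structure, and nothing more, here is what an orchestration instruction might reduce to: what the content is, where it lives, where it goes, and what the receiving system should do with it. Every field name and value below is hypothetical.

```python
# A minimal sketch of an orchestration command; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class OrchestrationJob:
    content_id: str          # identifies the file or stream
    source: str              # where the essence currently lives
    destination: str         # target device or system
    operation: str           # what the target should do: "record", "transcode", ...
    parameters: dict = field(default_factory=dict)

job = OrchestrationJob(
    content_id="evening-news-pkg-042",
    source="udp://239.1.1.10:5004",      # a live multicast stream
    destination="playout-server-2",
    operation="record",
    parameters={"wrapper": "MXF", "codec": "AVC-I 100"},
)
print(job)
```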
Typical IP network with multiple clients and storage. In this case, one network (top half) serves the business sector; the other (bottom half) is exclusively reserved and protected for content tasks.
There is a joint task force between the EBU and AMWA, with considerable participation from most industry vendors, to create a standard (protocol) that all devices and systems will recognize as a command structure. This is the Framework for Interoperable Media Services, more familiarly known as FIMS. Think of it as RS422 for the IP world. This is great: everyone working together on a standard so devices can communicate with each other. Call it an IP version of “video out to video in” with the SMPTE standards.
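To give a feel for what a FIMS-style command could look like, here is a hedged sketch of submitting a transfer job to a media service. The endpoint URL and JSON field names are invented for illustration and are not the actual FIMS schema, which is defined by the EBU/AMWA task force.

```python
# Sketch of a FIMS-style request: a common command vocabulary any compliant
# device could accept. Endpoint and field names are hypothetical.
import json
import urllib.request

transfer_job = {
    "jobType": "TransferJob",
    "priority": "high",
    "input": "smb://ingest-store/raw/clip-0042.mxf",
    "output": "smb://playout-store/ready/clip-0042.mxf",
}

request = urllib.request.Request(
    "http://media-services.example/api/transfer",   # hypothetical endpoint
    data=json.dumps(transfer_job).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # would submit the job to the service
print(json.dumps(transfer_job, indent=2))
```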
SONY recently announced an encapsulating methodology (protocol) that addresses the transport of media and the timing reference. It takes the form of a chip set they propose to embed in all devices and systems to create the IP stream, for true IP interoperability of streams.
According to conversations with the EBU, the two efforts will work together: the SONY technology encapsulating and transporting the media, and the FIMS protocol enabling devices and systems to understand what it is and what to do with it.
We need a solution for live content
Playout systems are splicing, grooming and layering, placing interstitial content, banners, lower thirds and snipes on all forms of file-based programming on the air every day. Streaming services online seem able to cut seamlessly between program and commercials every time I try to scrub ahead in a show - and they have been doing it for a while. Is the challenge of doing the same with a full resolution IP stream that much more daunting?
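The splicing itself is conceptually simple. Here is a toy sketch of cutting a program timeline at a splice point and inserting a break; the item names and durations are invented for illustration.

```python
# Toy version of what a playout splicer does: cut the program at a splice
# point and insert interstitial items, keeping the timeline continuous.

def splice(program, interstitials, splice_at):
    """Split `program` (a list of (name, duration) tuples) at `splice_at`
    seconds and insert `interstitials` there."""
    out, elapsed = [], 0.0
    for name, duration in program:
        if elapsed <= splice_at < elapsed + duration:
            head = splice_at - elapsed
            tail = duration - head
            if head > 0:
                out.append((name + " (part 1)", head))
            out.extend(interstitials)
            if tail > 0:
                out.append((name + " (part 2)", tail))
        else:
            out.append((name, duration))
        elapsed += duration
    return out

show = [("act-1", 600.0), ("act-2", 540.0)]
breaks = [("commercial-a", 30.0), ("commercial-b", 30.0)]
for item in splice(show, breaks, splice_at=300.0):
    print(item)
```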
No matter the internal signal format, live production requires a familiar production GUI like that provided by this Grass Valley Karrera production switcher.
At each technology transition point there were little boxes that solved these problems: A/D, D/A, frame syncs, up/down converters, transcoders and transmuxers. It’s hard to have a conversation today that doesn’t include API or XML. We now have middleware and all kinds of new tools to integrate these systems. Why should live production be treated as a different problem? If all content is converted to a stream before it enters the production switcher, then there is no need to intercut between SDI and IP. Even now, SDI is encoded to IP for file recording, distribution and transport.
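That last step is well-trodden territory. As a simplified sketch of the framing underneath a standard like SMPTE ST 2022-6, here is media payload wrapped in a basic RTP header (RFC 3550); the payload type, SSRC and dummy payload are illustrative, and ST 2022-6’s additional payload header is omitted.

```python
# Stripped-down sketch of the "SDI encoded to IP" step: media payload
# carried in a 12-byte RTP header per RFC 3550.
import struct

def rtp_packet(seq: int, timestamp: int, payload: bytes,
               payload_type: int = 98, ssrc: int = 0x11223344) -> bytes:
    """Build a minimal RTP packet: 12-byte header + payload."""
    version = 2
    first_byte = version << 6          # no padding, extension, or CSRCs
    second_byte = payload_type & 0x7F  # marker bit clear
    header = struct.pack("!BBHII", first_byte, second_byte,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

pkt = rtp_packet(seq=1, timestamp=90000, payload=b"\x00" * 1376)
print(len(pkt), pkt[:12].hex())
```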
At NAB, according to comments posted in one of the social network discussion groups, SONY and Grass Valley informed those allowed past the double secret handshake and code word authentication that they would be showing IP direct from a camera and IP direct into a production switcher soon. Could that be IBC soon, CCW soon, NAB 2016 soon, or just coming soon? During my own NAB research I asked a few server vendors if they could ingest an IP stream directly and create a file. The answers were more often no than yes, yet the response was almost uniform: it could easily be done if anyone wanted it, or even asked.
We can create, transmit and transport IP streams, and have been doing so for a while. We have published and accepted standards. We have the test, measurement and monitoring technology. IP networks can support it in both bandwidth and performance.
Should this last mile be this hard?
Follow Gary Olson in his IP tutorial series "Smoothing the Rocky Road to IP"
The Anatomy of the IP Network, Part 1
The Anatomy of the IP Network, Part 2