Keeping Studio IP Media Streams Reliable - Part 2
Distributing error-free IP media streams is only half the battle when building reliable broadcast infrastructures. SDP files must match their associated IP media essence, or downstream equipment will not be able to decode the streams. In this article we dig deeper into SDP files and IP media streams to understand how to keep broadcast infrastructures dependable.
Monitoring
The flexibility that IP infrastructures deliver is unprecedented in broadcast facilities. Not only can we distribute video and audio over COTS infrastructures, but we no longer need to make special provisions for each type of media, control, and monitoring information distributed throughout the network. For example, a traditional broadcast facility would not only need separate video and audio routing matrices, but also provision for RS422 control signals, GPIOs, and timecode. Each of these systems comprised its own dedicated sub-infrastructure requiring custom hardware, adding to the cost and complexity of the broadcast facility, and changing these systems was often difficult and time consuming.
Modern broadcast and IT infrastructures predominantly use Ethernet and fiber transport to form the basis of their IP networks. WiFi is used for bring-your-own devices, but the streamed media is limited to highly compressed proxy streams, as the latency and packet jitter involved make it impractical to stream an ST2110 HD flow across a WiFi link. Transferring media to the outside world often requires MPLS or dedicated layer-2 networks to provide the greatest flexibility, resilience, and data throughput with the lowest possible latency. Furthermore, all these physical interfaces are standard connections within the IT industry and so do not require custom connectivity such as that found with SDI and AES.
Monitoring IP streamed media requires more than a view of QoS elements such as packet jitter or packet loss. Although these are important, and still the primary concern for most IP networks, other systems within the infrastructure need equal attention. Essence-layer monitoring, prior to video and audio decoding, includes checking for malformed RTP packets where out-of-sequence numbering could occur, incorrectly specified ST2110 parameters where the video frame rate in the essence and the SDP file do not match, and ST2110-21 issues where the wide and narrow gap definitions contradict each other, causing problems with transmission traffic shaping. Other conflicts, such as poorly defined primary and backup packet timing deltas within ST2022-7, are difficult to detect.
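As a minimal sketch of the RTP sequence check described above, the helper below detects gaps, duplicates, and reordered packets in a run of 16-bit RTP sequence numbers, handling the wraparound at 65535. The function names are illustrative, not part of any standard API.

```python
def seq_delta(prev: int, cur: int) -> int:
    """Signed difference between two 16-bit RTP sequence numbers,
    accounting for wraparound at 65536."""
    d = (cur - prev) & 0xFFFF
    return d - 0x10000 if d > 0x7FFF else d

def check_sequence(seq_numbers):
    """Return (index, delta) anomalies: gaps (delta > 1) indicate lost
    packets; delta <= 0 indicates duplicates or reordering."""
    anomalies = []
    prev = None
    for i, seq in enumerate(seq_numbers):
        if prev is not None:
            d = seq_delta(prev, seq)
            if d != 1:
                anomalies.append((i, d))
        prev = seq
    return anomalies
```

A monitor feeding live sequence numbers into such a check can distinguish a single reordered packet from sustained loss, something a human watching a picture monitor cannot.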
Fig 2 – Example SDP file showing the various video parameters such as colorimetry and image size. The parameters in this file must exactly match those in the associated media stream.
With all this in mind, it soon becomes clear that automated and exception monitoring are critical tools for the broadcast engineer. Having the ability to constantly monitor the high-level parameters and compare them to SDP files provides incredible insight into how reliably a network is operating as well as helping to find faults should they occur.
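The SDP-versus-stream comparison can be illustrated with a short sketch: parse the key/value parameters from an SDP `a=fmtp` line (the attribute carrying ST2110-20 video parameters such as width, height, and frame rate) and diff them against values measured from the live essence. This is a simplified illustration, not a full SDP parser.

```python
def parse_fmtp(sdp_text: str) -> dict:
    """Extract key=value parameters from the first a=fmtp line of an SDP body."""
    for line in sdp_text.splitlines():
        if line.startswith("a=fmtp:"):
            # drop the "a=fmtp:<payload>" prefix, then split "key=value; ..."
            params = line.split(None, 1)[1]
            return dict(p.strip().split("=", 1)
                        for p in params.split(";") if "=" in p)
    return {}

def compare_to_stream(sdp_params: dict, measured: dict) -> list:
    """Report (key, sdp_value, measured_value) wherever they disagree."""
    return [(k, sdp_params.get(k), v)
            for k, v in measured.items()
            if sdp_params.get(k) != v]
```

Running this continuously against every flow turns a subtle decode failure (for example, an SDP advertising 25 fps against a 29.97 fps essence) into an explicit, named exception.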
Deep Monitoring
When broadcasters are monitoring their IP networks, they not only have to look at the transport and streaming levels, but also must think about the intrinsic video and audio essence. It’s perfectly possible to have a valid IP data stream in terms of the transport and media but have unviewable or distorted pictures and sound.
Integrated monitoring systems that combine network and transport stream monitoring with video and audio essence monitoring allow broadcasters to home in on a suspect media stream quickly and effectively. This is similar to using traditional waveform and vector monitors, but combined with the media streams in an IP network.
The power of integrated monitoring becomes clear when we realize that the system can monitor tens, if not hundreds, of media essence streams at the same time. The alternative would be round-robin monitoring, stepping a waveform/vectorscope across all of the streams one at a time. But with integrated monitoring, most, if not all, of the essences can be monitored and validated simultaneously.
Critical to all media streams is timing. ST2110 employs PTP and any discontinuities in the time base could easily cause picture and sound breakup and distortion. Continually monitoring the PTP reference and its association with the ST2110 packets will quickly flag any problems to the engineer.
PTP must be stable, and any adverse fluctuation in the timing reference will cause buffer underruns and overruns, resulting in dropped packets and distorted video or audio. Having a system that constantly monitors the PTP parameters and determines their stability and accuracy is critical for broadcast IP infrastructures. Exception monitoring of timing and PTP removes the need for the engineer to continuously look for anomalies within the system. Quite often the monitoring system will find problems that the engineer has no chance of detecting.
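The exception-monitoring idea for PTP can be sketched as a rolling window over offset-from-master samples, raising alarms when either the absolute offset or its jitter exceeds a limit. The thresholds below are illustrative assumptions, not values from any standard; real deployments tune them to the grandmaster and network in use.

```python
from collections import deque
from statistics import pstdev

class PtpMonitor:
    """Rolling-window exception monitor for PTP offset-from-master samples."""

    def __init__(self, window=100, max_offset_ns=1_000, max_jitter_ns=500):
        self.samples = deque(maxlen=window)   # most recent offset samples
        self.max_offset_ns = max_offset_ns    # absolute offset limit (assumed)
        self.max_jitter_ns = max_jitter_ns    # std-dev limit (assumed)

    def update(self, offset_ns: float) -> list:
        """Record one offset sample and return any exception messages."""
        self.samples.append(offset_ns)
        alarms = []
        if abs(offset_ns) > self.max_offset_ns:
            alarms.append(f"offset {offset_ns:.0f} ns exceeds limit")
        if len(self.samples) >= 2 and pstdev(self.samples) > self.max_jitter_ns:
            alarms.append("offset jitter exceeds limit")
        return alarms
```

Because the monitor only emits on exceptions, the engineer sees nothing while timing is healthy and a precise message the moment it is not.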
Systems such as ST2022-7 provide seamless protection for RTP-based feeds, including ST2022-6, ST2110-20/30, and MPEG compressed streams. The sender transmits two identical streams of packets sharing common RTP sequencing. If one stream loses packets, the receiver uses the RTP sequence numbers to seamlessly switch to the other. Although a small buffer is required at the receiver, ST2022-7 provides an incredibly useful solution where high resilience is needed. When deploying ST2022-7, a limit must also be established for how far apart the two copies of the same packet can arrive on the primary and backup paths. Again, this is difficult to check manually, but it forms part of the core of an integrated monitoring system.
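The primary/backup packet-delta check can be sketched as follows: match packets by RTP sequence number on both legs and flag any pair whose arrival-time skew exceeds what the receiver's reconstruction buffer can absorb. The 10 ms default below is an illustrative assumption, not a value from ST2022-7; the real limit depends on the receiver's buffer depth.

```python
def check_2022_7_skew(primary, backup, max_skew_ms=10.0):
    """Flag duplicate packets whose primary/backup arrival skew is too large.

    `primary` and `backup` map RTP sequence number -> arrival time (ms).
    Returns a list of (sequence_number, skew_ms) violations.
    """
    violations = []
    for seq, t_primary in primary.items():
        t_backup = backup.get(seq)
        if t_backup is None:
            continue  # packet lost on one leg; the other leg covers it
        skew = abs(t_primary - t_backup)
        if skew > max_skew_ms:
            violations.append((seq, skew))
    return violations
```

A monitor running this comparison continuously can warn that the two paths are drifting apart before the skew actually exceeds the receiver's buffer and protection silently fails.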
An engineer would find it almost impossible to detect whether the receiver was switching between the streams, yet knowing this may point to a more serious issue that must be addressed. As the system is designed to work with lost packets, it's not unreasonable to expect packets to be lost, but if too many are lost then an alarm should be raised. The best method of achieving this is not to have an engineer constantly watching screens trying to determine whether a fault has occurred, but instead an integrated monitoring system that interrogates the receiver, determines when a certain number of losses has occurred, and triggers an alarm, providing another level of exception monitoring.
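The loss-threshold alarm described above amounts to counting loss events over a sliding time window. A minimal sketch, assuming the monitor is notified of each loss (an integrated system would instead read the loss counters from the ST2022-7 receiver itself); the window and threshold values are illustrative.

```python
from collections import deque

class LossAlarm:
    """Raise an alarm when losses within a sliding window exceed a limit."""

    def __init__(self, max_losses=10, window_s=60.0):
        self.max_losses = max_losses  # alarm threshold (assumed value)
        self.window_s = window_s      # sliding window length in seconds
        self.events = deque()         # timestamps of recent loss events

    def record_loss(self, now: float) -> bool:
        """Log one lost packet at time `now` (seconds); True if alarming."""
        self.events.append(now)
        # age out events that have left the window
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_losses
```

Occasional losses age out of the window silently; only a sustained burst crosses the threshold and wakes the engineer.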
Human errors and software bugs can also be detected and dealt with by continually comparing the contents of the SDP file with the media essence parameters. Any differences between the two can be quickly flagged by the integrated monitoring system, facilitating a speedy resolution.
Conclusion
Broadcast IP infrastructures are flexible, scalable, and resilient, but the price we pay for this is increased complexity. It’s unrealistic to expect an engineer to deeply understand all aspects of the IP network, IP flows, and media signals as the systems are just too complex. However, integrated exception monitoring allows errors to be automatically detected and alarms triggered so that the engineer can get on with diagnosing the problem quickly and efficiently.