A key advantage of IP networks is that engineers can troubleshoot much of the system and equipment remotely.
The broadcast equipment industry is in the process of making the transition to IP-based transport for video, audio and data. This has led to the development of a suite of standards, including SMPTE ST 2022-6 for encapsulating uncompressed SDI within IP packets and SMPTE ST 2110 for live IP production, which carries video, audio and data as separate packet flows.
All of this was successfully demonstrated at the NAB 2017 Show in the IP Showcase booth. The demo showed separate routing of video, audio, and ANC data flows over IP networks in real-time to support production and playout applications, all synchronized by ST 2059 Precision Time Protocol (PTP). Based on this progress, it’s fair to say that anyone building a next generation production facility will likely base it on IP and equipment conforming to these standards.
While this is all fine, many broadcasters face a steep learning curve as they ramp up. IP introduces many new skill requirements and technical challenges, including jitter, latency, the risk of dropped packets and network asymmetry, which results in different path delays upstream and downstream. IP is a complex set of bi-directional protocols, and deployment requires knowledge of these protocols and of the technical requirements at both the source and the destination.
Making the move to IP
Deploying IP for live video production applications is effectively the collision of the two worlds of video and network engineering. Video engineers are comfortable with the use of SDI, coaxial cable, patch panels, black burst and tri-level sync for timing and, above all, monitoring signal quality. The challenge for the video engineer is to understand IT technologies and the impact of an IT infrastructure on the video.
On the other hand, network engineers are familiar and comfortable with IP flows, protocols, network traffic, router configuration, Precision Time Protocol (PTP) and Network Time Protocol (NTP) for timing. The biggest difference, however, is that in most data center applications, lost data can be re-sent. This is not the case with high-bitrate video. The challenge for the network engineer is in understanding video technology and its impact on IT infrastructure.
While the ultimate goal may be an end-to-end IP infrastructure, huge investments in existing technology and workflows mean that video and network engineers will need the tools to diagnose and correlate both SDI and IP signal types. Some monitors convert an IP input signal into SDI at the front end, but such an approach lacks true IP media analysis and cannot provide detailed diagnosis of IP traffic issues.
The ideal monitoring solution for a hybrid IP/SDI network is one that can perform a wide variety of IP-layer measurements as well as monitor video and audio content. Monitoring and ease of use are critical to ensuring QoS levels across complex broadcast environments that typically involve compressed and uncompressed video transmissions through SDI and IP signal paths.
Many of the issues that can cause problems in IP networks can be traced back to packet jitter. Excessive packet jitter can lead to buffer overflows and underflows causing dropped packets and stalled data flows. Other problems stem from timing delay and asymmetry of PTP packet flows. In hybrid SDI and IP workflows, it is also necessary to ensure that the relationship between the SDI and IP video is consistent to enable seamless frame accurate switching. This can be achieved by measuring the relationship between the black burst/tri-level sync and the PTP clock and making any necessary correction by skewing the SDI syncs with reference to the PTP clock.
Ensuring the delivery of superior QoS is difficult enough in today’s increasingly complex broadcast environments. This challenge becomes even harder if multiple tools are needed to test mixed SDI- and IP-based workflows due to the inherent lack of correlation.
To establish root causes of network errors, for instance, it is necessary to understand whether visible impairments are being caused by IP errors or if some other fault is causing the impairment. Figure 1 shows how a hybrid SDI/IP network monitoring tool can be used to track time-correlated video and IP errors. This is made possible by correlating the time stamps of the video errors with those of the RTP packet errors.
A video CRC error does not in itself confirm that the video is impaired, making it desirable to use traditional monitoring methods such as picture and waveform displays as well as audio bars, as shown in Figure 2.
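The time-correlation idea above can be sketched in a few lines of Python: given two timestamped error logs (video CRC errors and RTP packet errors), pair up events that fall within a small window of each other. The list-of-timestamps log format and the 20 ms window are illustrative assumptions, not the interface of any particular monitoring product.

```python
# Sketch: correlate timestamped video CRC errors with RTP packet errors.
# The log format and the 20 ms window are invented for illustration.

CORRELATION_WINDOW_S = 0.020  # match events within 20 ms of each other

def correlate_errors(video_errors, rtp_errors, window=CORRELATION_WINDOW_S):
    """Return (video_ts, rtp_ts) pairs whose timestamps fall within `window`.

    video_errors, rtp_errors: sorted lists of timestamps in seconds.
    """
    pairs = []
    i = 0
    for v_ts in video_errors:
        # skip RTP errors too old to match this video error
        while i < len(rtp_errors) and rtp_errors[i] < v_ts - window:
            i += 1
        j = i
        while j < len(rtp_errors) and rtp_errors[j] <= v_ts + window:
            pairs.append((v_ts, rtp_errors[j]))
            j += 1
    return pairs

video = [0.101, 0.525, 0.910]          # CRC error timestamps (s)
rtp   = [0.100, 0.700, 0.912, 0.913]   # RTP error timestamps (s)
print(correlate_errors(video, rtp))
# → [(0.101, 0.1), (0.91, 0.912), (0.91, 0.913)]
```

Here the CRC error at 0.525 s has no nearby RTP error, pointing to a fault elsewhere in the chain rather than an IP transport problem.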
In any digital system, jitter is any deviation from the regular periodicity of the signal. In IP networks, jitter is the variation of the packet arrival interval at a receiver. If the network routers and switches are all configured and operating correctly, the most common cause of jitter is network congestion at router and switch interfaces.
The rate at which packets flow out of the de-jitter buffer is known as the “drain rate,” and the rate at which the buffer receives data is known as the “fill rate.” If the buffer is too small and the drain rate exceeds the fill rate, the buffer will eventually underflow, resulting in a stalled packet flow. If the fill rate exceeds the drain rate, the buffer will eventually overflow, resulting in packet loss. If, however, the buffer is too large, the network element introduces excessive latency. Jitter can be measured by plotting packet inter-arrival times versus time, as shown in Figure 3.
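The fill/drain behaviour can be illustrated with a toy simulation of a de-jitter buffer, assuming a fixed drain interval and a buffer measured in whole packets. The arrival pattern, capacity and rates below are invented for illustration; a real receiver buffer works on bytes and hardware timestamps.

```python
# Toy de-jitter buffer: packets arrive at irregular intervals (fill) and
# are read out at a fixed rate (drain). All times in integer microseconds.

def simulate_buffer(arrival_us, drain_interval_us, capacity):
    """Return a list of ('underflow'|'overflow', time_us) events.

    arrival_us: sorted packet arrival timestamps (µs).
    drain_interval_us: fixed read-out period (µs).
    capacity: buffer size in packets.
    """
    events, depth, i = [], 0, 0
    for t in range(0, arrival_us[-1] + drain_interval_us, drain_interval_us):
        # admit every packet that has arrived by time t
        while i < len(arrival_us) and arrival_us[i] <= t:
            if depth == capacity:
                events.append(("overflow", arrival_us[i]))  # packet dropped
            else:
                depth += 1
            i += 1
        # one read-out per drain interval
        if depth == 0:
            events.append(("underflow", t))  # stalled packet flow
        else:
            depth -= 1
    return events

# Nominal 1000 µs spacing, then a 4 ms gap followed by a burst:
arrivals = [0, 1000, 2000, 6000, 6100, 6200, 6300]
print(simulate_buffer(arrivals, drain_interval_us=1000, capacity=2))
# → [('underflow', 3000), ('underflow', 4000), ('underflow', 5000), ('overflow', 6300)]
```

The gap starves the buffer (underflows), and the catch-up burst then overflows it, which is exactly the failure pair described above.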
This is useful for identifying variations in jitter over time, but it is also useful to plot the distribution of inter-arrival intervals versus frequency of occurrence as a histogram. If the jitter is large enough that packets arrive outside the range the de-jitter buffer can absorb, the out-of-range packets are dropped. Being able to identify outliers, such as the example in Figure 4, helps determine whether network jitter is likely to cause, or has already caused, packet loss.
A series of packets with long inter-arrival intervals will inevitably be followed by a corresponding burst of packets with short inter-arrival intervals. It is this burst of traffic that can cause buffer overflow and lost packets, which occurs when the fill rate exceeds the drain rate for longer than the remaining buffer capacity can absorb.
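As a rough sketch, the histogram and outlier check might look like the following, assuming a nominal 1 ms packet interval and an invented ±500 µs tolerance standing in for the de-jitter buffer's range; real tolerances depend on the receiver's buffer depth.

```python
# Sketch: bin inter-arrival intervals into a histogram and flag outliers
# beyond an assumed de-jitter tolerance. Nominal interval and tolerance
# are illustrative figures, not values from any standard.

from collections import Counter

def interarrival_histogram(arrival_us, bin_us=100):
    """Return (histogram, intervals): intervals binned to `bin_us` µs."""
    intervals = [b - a for a, b in zip(arrival_us, arrival_us[1:])]
    hist = Counter((iv // bin_us) * bin_us for iv in intervals)
    return hist, intervals

def outliers(intervals, nominal_us=1000, tolerance_us=500):
    """Intervals outside the range the de-jitter buffer can absorb."""
    return [iv for iv in intervals if abs(iv - nominal_us) > tolerance_us]

arrival_us = [0, 1000, 2050, 2950, 6000, 6100, 7100]
hist, intervals = interarrival_histogram(arrival_us)
print(outliers(intervals))
# → [3050, 100]  (the long gap and the short catch-up interval)
```

The long gap and the short burst interval show up as paired outliers at opposite ends of the histogram, matching the gap-then-burst pattern described above.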
Tracking PTP errors
Device clocks in IP video networks have no inherent concept of system time, so the Precision Time Protocol (PTP) is used to synchronize these clocks. In effect, PTP provides genlock functionality equivalent to that delivered by black burst or tri-level sync in SDI networks. The overall PTP network time server is referred to as a PTP grandmaster. Devices that derive their time from it are called PTP slaves. PTP grandmasters are usually synchronized to GPS, GLONASS (the Russian Global Navigation Satellite System) or both.
To allow frame accurate switching between SDI and IP derived content, it is essential that the timing of the black burst/tri-level sync is not offset relative to the PTP clock. As shown in Figure 5, this is achieved by measuring the timing offset and then making any necessary correction by skewing the SDI syncs with reference to the PTP clock.
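As a simplified sketch of that correction, the code below computes the signed skew needed to move a measured sync edge onto the nearest ideal frame boundary, assuming, per the ST 2059-1 alignment model, that ideal boundaries fall on multiples of the frame period counted from the PTP epoch. In practice the sync edge is measured in hardware; the numbers here are illustrative.

```python
# Sketch: signed skew (ns) to align an SDI sync edge with the PTP clock,
# assuming ideal frame boundaries at multiples of the frame period from
# the PTP epoch. Example values only.

from fractions import Fraction

def sdi_skew_ns(sync_edge_ns, frame_period_ns):
    """Correction (ns) moving a sync edge to the nearest frame boundary.

    Negative = pull the edge earlier, positive = push it later.
    """
    phase = sync_edge_ns % frame_period_ns
    # choose the smaller move: back by `phase` or forward to the next boundary
    if phase <= frame_period_ns - phase:
        return -phase
    return frame_period_ns - phase

# 29.97 Hz frame period is exactly 1001/30000 s; truncated to whole ns here
frame_period_ns = int(Fraction(1001, 30000) * 1_000_000_000)  # 33_366_666 ns
print(sdi_skew_ns(33_366_700, frame_period_ns))
# → -34  (edge is 34 ns late; skew the SDI sync 34 ns earlier)
```

A real implementation would carry the fractional 29.97 Hz period exactly rather than truncating to nanoseconds, but the nearest-boundary logic is the same.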
A final consideration
In live production applications, network experts may not be present on the production site and networking equipment may not be in an easily accessible location. It is desirable that network and video engineers be able to control any diagnostic equipment remotely.
An all-IP infrastructure is the vision for most broadcasters around the world and is already starting to happen in many facilities. The initial momentum that started with SMPTE ST 2022-6 is likely to be accelerated with the advent of the SMPTE ST 2110 suite of standards. The reality, however, is that the transition will not happen overnight, leading to the need to manage hybrid SDI and IP workflows, and thus a need for IP and video engineers to work closely together to ensure seamless operation and quickly track down faults.
Educated in England where he received an Honors degree in Communications Engineering from the University of Kent, Mike Waidson started his career with a consumer television manufacturer as a research engineer in the digital video department, before moving into the broadcast industry. Mike has over thirty years of experience within the broadcast industry working for various video manufacturers. At Tektronix as an application engineer within the Video Business Division, Mike provides technical support on video measurement products.