IP networks must avoid excessive traffic peaks that can cause buffer overflow and degrade performance. Proactive prevention is key.
Migration towards ST 2110 and ST 2022-6 video networks for production and content delivery is picking up pace as the advantages of IP versus traditional SDI over coaxial cable carriage become more evident. The key drivers of IP include the introduction of more flexible and scalable business models based on virtualization and cloud technologies, along with the economies of scale and speed of technology development that stem from the use of commercial off-the-shelf (COTS) IT equipment.
While these benefits are compelling, the migration to IP video networks nevertheless poses significant technical challenges for broadcast engineers. Whereas SDI over coaxial cable was designed as a dedicated link for synchronous, point-to-point delivery of constant high-bitrate video, IP infrastructures are typically asynchronous in nature, and this characteristic presents major issues for real-time video delivery due to the potential for network congestion, latency, and jitter.
Sources of video network congestion
To achieve a high Quality of Service (QoS) with IP video, the network traffic flow should avoid excessive peaks that can cause overflow of switch buffers. In reality, the inherent burstiness of IP networks, combined with bandwidth constraints, can result in unmanaged traffic levels, which can create packet congestion and latency as router ports become blocked due to buffer exhaustion. This type of packet congestion can be exacerbated in multi-hop infrastructures, with the different paths taken by signals potentially causing further variations in network latency.
These sources of network congestion and latency will delay the arrival of video packets and, in turn, potentially lead to significant jitter problems. In general terms, jitter is a deviation in signal periodicity. In the case of an IP video signal, jitter is a deviation from the expected packet arrival periodicity. Excessive deviations in Packet Interval Time (PIT) — also known as Inter Packet Arrival Time (IPAT) — can lead to packets being stalled, and to loss of packets at the receiver.
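The PIT measurement described above is simply the difference between consecutive packet arrival timestamps. A minimal sketch of that calculation, using hypothetical arrival timestamps for a flow with a nominal 10 µs packet interval:

```python
from statistics import mean

def packet_interval_times(arrival_times):
    """Compute Packet Interval Times (PIT) from a list of packet
    arrival timestamps in seconds: PIT[i] = t[i+1] - t[i]."""
    return [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]

# Hypothetical capture: packets nominally every 10 us, with some jitter.
arrivals = [0.0, 10e-6, 21e-6, 29e-6, 40e-6, 52e-6]
pits = packet_interval_times(arrivals)
print(f"min={min(pits):.1e}  mean={mean(pits):.1e}  max={max(pits):.1e}")
```

Any PIT value far from the nominal 10 µs interval represents jitter introduced between sender and receiver.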
Ultimately, if it is not addressed, jitter can seriously impact QoS for broadcasters. This is particularly true for a low-latency system that requires a small receiver buffer size. Therefore, in broadcast video networks, it is vital to ensure that excessive deviation past the expected interval is not occurring, as this risks stalling the signal (due to receiver de-jitter buffer underflow). Broadcasters also must prevent too many packets from arriving with smaller-than-expected intervals, as this can overflow the receiver de-jitter buffer and lead to packet loss.
Both excessive deviation and packet overflow lead to video impairment and, in extreme cases, a loss of the video signal. However, with the ability to monitor and diagnose network congestion, along with associated jitter problems, broadcasters can maintain a healthy video network that supports reliable video delivery.
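The underflow and overflow behaviour described above can be illustrated with a toy model of a receiver de-jitter buffer. The parameters and draining policy here are illustrative assumptions, not how any particular receiver is implemented: packets arrive at jittered timestamps, and the decoder removes one packet per fixed period once the buffer has prefilled.

```python
def simulate_dejitter_buffer(arrival_times, drain_period, capacity, prefill):
    """Toy de-jitter buffer model (illustrative only).

    Packets arrive at the given timestamps (seconds); the decoder
    drains one packet every `drain_period` seconds once `prefill`
    packets are buffered. Returns (underflows, overflows), i.e.
    stall events and dropped packets.
    """
    depth = 0
    underflows = overflows = 0
    next_drain = None
    for t in arrival_times:
        # Drain any packets due before this arrival.
        while next_drain is not None and next_drain <= t:
            if depth > 0:
                depth -= 1
            else:
                underflows += 1      # buffer empty: the picture stalls
            next_drain += drain_period
        if depth < capacity:
            depth += 1
        else:
            overflows += 1           # buffer full: packet is lost
        if next_drain is None and depth >= prefill:
            next_drain = t + drain_period
    return underflows, overflows

# A burst of early arrivals overflows a small 2-packet buffer.
under, over = simulate_dejitter_buffer([0.0, 0.1, 0.2, 0.3], 1.0, 2, 1)
```

Shrinking `capacity` (a low-latency configuration) makes the same arrival pattern overflow sooner, which is why low-latency systems are especially sensitive to jitter.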
Network congestion monitoring and diagnosis
Jitter can be measured through observation of variations in the Packet Interval Time (PIT). Analysis of the PIT distribution of a video signal will provide an indication of its health, and warn the engineer of any broadcast-critical network congestion.
By plotting a PIT histogram, the broadcast engineer can gain a real-time view of how network congestion is affecting a video signal. Measurement of the PIT mean, as well as minimum and maximum values, offers instant network analysis at a glance.
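A minimal sketch of such a histogram, binning PIT samples and printing the at-a-glance min/mean/max summary (the sample values and 1 µs bin width are hypothetical):

```python
from collections import Counter

def pit_histogram(pits, bin_width):
    """Bin PIT values (seconds) into bin_width-wide buckets, print a
    text histogram and a min/mean/max summary, and return the bins."""
    bins = Counter(round(p / bin_width) for p in pits)
    for b in sorted(bins):
        print(f"{b * bin_width * 1e6:6.1f} us | {'#' * bins[b]}")
    print(f"min={min(pits) * 1e6:.1f} us  "
          f"mean={sum(pits) / len(pits) * 1e6:.1f} us  "
          f"max={max(pits) * 1e6:.1f} us")
    return bins

# Hypothetical PIT samples clustered around a 10 us nominal interval,
# with one long outlier caused by congestion.
bins = pit_histogram([9e-6, 10e-6, 10e-6, 11e-6, 10e-6, 14e-6],
                     bin_width=1e-6)
```

A healthy signal shows a tight peak at the nominal interval; outlier bins far from the peak are the congestion warning signs discussed below.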
In a “perfect” network, a video signal would have constant periodicity, without jitter, and all PIT values would be the same. In a network with very low jitter, the engineer would expect to see a normal distribution, with the vast majority of PIT values in and around the signal period (the expected interval arrival time). However, the reality of congestion in networks typically yields a broader distribution of PIT values around the expected nominal value.
Hence, a healthy video signal will have a distribution peak centred around the expected PIT. Due to the individual characteristics of a network, some significant jitter might be tolerable, but a high occurrence of jitter at the extremes would potentially lead to video signal impairment or loss. An impaired video signal will have a packet distribution characterised by a high occurrence of extremely long or short PIT values and/or by a distribution mean different from the expected signal period.
In addition to performing real-time jitter measurements, the engineer can track PIT variance over time to gain a longer-term monitoring perspective. Logging this data can provide vital information on the health of a network. For instance, a deterioration could be indicated by increased maximum PIT and a steadily rising mean. A PIT logging tool can also provide historical information on network congestion health at the time of an on-air incident.
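The long-term logging described above can be sketched as a rolling record of per-window PIT statistics, with a simple check for a steadily rising mean. The class name, window sizes, and trend test are assumptions for illustration, not a description of any vendor's tool:

```python
from collections import deque

class PitTrendLogger:
    """Log per-window PIT statistics so slow network deterioration
    (a rising mean, growing maximum) shows up over time. Sketch only."""

    def __init__(self, window=1000, history=288):
        self.window = window
        self.samples = []
        self.log = deque(maxlen=history)  # (min, mean, max) per window

    def add(self, pit):
        self.samples.append(pit)
        if len(self.samples) >= self.window:
            s = self.samples
            self.log.append((min(s), sum(s) / len(s), max(s)))
            self.samples = []

    def mean_is_rising(self, n=3):
        """True if the mean PIT rose across each of the last n windows."""
        means = [m for _, m, _ in list(self.log)[-n:]]
        return len(means) == n and all(a < b for a, b in zip(means, means[1:]))

# Feed in PIT samples whose mean creeps upward window by window.
logger = PitTrendLogger(window=2)
for pit in [10e-6, 10e-6, 11e-6, 11e-6, 12e-6, 12e-6]:
    logger.add(pit)
```

The retained log also gives the engineer the historical context for an on-air incident: the windows recorded around the time of the fault.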
However, it’s not enough to analyse a video network when there’s a problem. Broadcast engineers need to stress test their facility as their IP network evolves, and as new devices are added. A packet profile generator tool allows an engineer to analyse the video network for vulnerability to congestion and jitter by stress-testing the response of the facility to IP video signals transmitted under a variety of network conditions. The packet profile generator can flag network congestion issues before they become a real problem.
A packet profile generator displays a histogram showing the generated signal’s PIT. With this information, it is possible to adjust the timing to simulate network-introduced packet interval timing jitter. The engineer can use this capability to create custom profiles for testing and then also save network distribution profiles for rapid re-use at a later time. In conjunction with IP video packet analysis tools, the packet profile generator provides a powerful capability for network stress testing and fault diagnosis.
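The timing-adjustment capability described above amounts to generating packet timestamps around a nominal period with a chosen jitter distribution. A toy stand-in for a profile generator, assuming Gaussian jitter (real tools offer a wider range of configurable distributions):

```python
import random

def jittered_schedule(n_packets, period, jitter_sd, seed=0):
    """Generate packet transmission timestamps around a nominal
    `period` (seconds), adding Gaussian timing jitter with standard
    deviation `jitter_sd`. Illustrative sketch of a jitter profile."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(n_packets):
        times.append(max(0.0, t + rng.gauss(0.0, jitter_sd)))
        t += period
    return sorted(times)  # arrivals cannot reorder on a single link

# 1000 packets at a 10 us nominal interval with 2 us of jitter.
schedule = jittered_schedule(1000, 10e-6, 2e-6)
```

Feeding such a schedule through the facility, then analysing the received PIT distribution, reveals how much injected jitter the network and receivers can absorb before impairment appears.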
IP video networks have created a new set of test and measurement challenges for broadcast engineers, especially with respect to avoiding network congestion. However, new IP signal generation, analysis and monitoring tools simplify traffic analysis and network testing, thereby empowering broadcasters to avoid serious jitter issues that can jeopardise broadcast Quality of Service.
Neil Sharpe is Head of Marketing for PHABRIX.