Understanding IP Broadcast Production Networks: Part 14 - Delay Monitoring

Because we use buffers to reassemble asynchronous streams, we must measure how long individual packets take to reliably reach the receiver, and the maximum and minimum delay across all packets arriving at the receiver.

Video and audio monitoring in baseband formats is well established for levels, noise, and distortions. Television monitors provide subjective visual checks, and objective measurements can be taken using waveform monitors. Audio is similar: loudspeakers and headphones provide subjective checks, while PPMs, VU meters and loudness meters provide objective verification.

In IT, subjective monitoring comes down to the user experience: how long does a web page take to respond to a mouse click? How fast does a file transfer? For objective analysis, IT networks use packet analysis tools such as Wireshark to look closely at the packets, and iPerf to find the absolute maximum data rate of a network link.

Video and audio bring a new dimension to monitoring for the IT department. Not only are we concerned with how to measure the video and audio itself, we must also analyze the time each IP packet takes to arrive at its destination, and how that time varies across the other packets in the stream. Packets that take too long are dropped from the receiver's decoding buffer, causing signal corruption.

High-level audio and video monitoring will always be important. Evangelists have often proclaimed that in a digital world we don't need audio level monitoring, as the signals don't suffer the same distortion and level problems as analog lines. Anybody working at the front end of a broadcast station will tell you the reality is somewhat different.

In the past, broadcast engineers have had the luxury of assuming the underlying network is robust and solid. An SDI distribution system will introduce nanoseconds of delay at 3Gb/s, and a twisted-pair balanced audio system will have similar delays with virtually no dropout.

IP networks are very different: they are designed on the assumption that there will be packet loss and variable delay. Because IP networks are resilient and self-healing, it is possible, and indeed likely, that packets streamed across a network will take different routes, and some won't get there at all. If a router fails, the network's resilience will send subsequent packets via a different route, often longer than the original. If the failed router recovers, later packets can revert to the shorter route and overtake those still in transit on the longer one, so packets arrive out of sequence.

Figure 1 - Buffers are used as a temporary store to re-sequence packets.

Significant variation in packet transit time occurs because of the queueing that takes place in switches and routers. In an integrated IP network, all kinds of data are being transferred, from accounts transactions to office files, and video and audio compete with them to reach their destination.

Receiver buffering is a straightforward way of dealing with delay and sequencing problems. A buffer is a temporary storage area of computer memory into which packets are written as they arrive, out of sequence and with varying timing. The receiver algorithm then reads the packets out of the buffer in sequence and presents them to the decoding engine.
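
To make this concrete, here is a minimal sketch of a re-sequencing buffer in Python. The packet structure and sequence numbering are simplified stand-ins for what a real receiver (an RTP sequence number, for example) would use.

```python
import heapq

class JitterBuffer:
    """Minimal re-sequencing buffer: packets arrive out of order and
    at varying times, and are released in sequence-number order."""

    def __init__(self, depth):
        self.depth = depth       # packets held before we must force a release
        self.heap = []           # min-heap keyed on sequence number
        self.next_seq = 0        # next sequence number the decoder expects

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Release in-sequence packets; skip a missing packet if the
        buffer fills up before it arrives (it is treated as lost)."""
        out = []
        while self.heap and (self.heap[0][0] == self.next_seq
                             or len(self.heap) > self.depth):
            seq, payload = heapq.heappop(self.heap)
            if seq < self.next_seq:
                continue         # late arrival: decoder has moved on, drop it
            out.append(payload)
            self.next_seq = seq + 1
        return out

# Packets arriving in the order 0, 2, 1 are presented to the decoder as 0, 1, 2.
buf = JitterBuffer(depth=8)
for seq in (0, 2, 1):
    buf.push(seq, f"packet {seq}")
    print(buf.pop_ready())
```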

Buffers are a trade-off between delay and completeness of data. The longer the buffer, the more likely it is to catch packets that have taken a disproportionately long time to travel. However, the read-out algorithm must wait at least as long as the latest packet it is prepared to accept. In effect, the bigger the buffer, the longer the delay, as the sketch below illustrates.
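
As a rough illustration of that trade-off, the latency a buffer adds is simply its depth multiplied by the packet interval. The packet rate below is an assumed figure, purely for the arithmetic; real rates depend on the format and payload size.

```python
# Illustrative arithmetic only: latency added by the buffer grows
# linearly with its depth. The packet rate is an assumed figure.
packet_rate = 200_000                    # packets per second (assumed)
packet_interval_us = 1_000_000 / packet_rate

for depth in (64, 256, 1024):
    added_delay_us = depth * packet_interval_us
    print(f"buffer depth {depth:>4} packets -> "
          f"~{added_delay_us:,.0f} us added latency")
```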

Dropped packets are caused either by congestion in a switch or router, or by interference on a network cable. Congestion occurs when too many packets arrive at the router's inputs too quickly for it to respond, or when the egress port becomes oversubscribed. Much processing goes on inside a router or switch; the more features the device provides, the greater the chance of packet loss.

This is one of the reasons IT engineers try to use layer 2 (Ethernet) switches wherever possible. A switch uses a lookup table to decide where to send each frame based on the destination MAC address in the Ethernet header; this is relatively simple and can be done in near real time using bitwise comparisons in an FPGA (Field Programmable Gate Array).
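
The forwarding decision itself is simple enough to model in a few lines. This is a toy illustration, not how a switch is implemented; in practice the lookup runs in hardware against every frame at line rate, and the table contents here are invented.

```python
# Toy model of a layer 2 forwarding decision. Real switches perform this
# lookup in hardware at line rate; the table contents here are invented.
ALL_PORTS = {1, 2, 3, 4}
mac_table = {
    "00:1a:2b:3c:4d:5e": 1,              # destination MAC -> egress port
    "00:1a:2b:3c:4d:5f": 2,
}

def forward(dst_mac: str, ingress_port: int) -> set[int]:
    """Return the egress port(s) for a frame with this destination MAC."""
    port = mac_table.get(dst_mac)
    if port is None:
        # Unknown destination: flood out of every port except the ingress.
        return ALL_PORTS - {ingress_port}
    return {port}

print(forward("00:1a:2b:3c:4d:5e", ingress_port=3))   # {1}
print(forward("00:aa:bb:cc:dd:ee", ingress_port=3))   # {1, 2, 4}
```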

As a router needs to dig deeper into the Ethernet frame or IP packet, it requires more processing power and the potential for packet loss increases. This is one of the areas IT engineers tend to gloss over, working on the assumption that congestion occurs infrequently and that, when it does, TCP-based protocols such as FTP will fix the problem by resending any lost packets.

Figure 2 - Computer network interface cards introduce delay and jitter.

In broadcast television, we cannot afford to drop even one packet. ST 2022-5 incorporates FEC (Forward Error Correction), but it is not designed to take the place of TCP-style retransmission or to fix the large bursts of loss caused by congestion, and relying on it to do so could give unpredictable results.
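
FEC schemes of this kind are built on XOR parity: one parity packet protects a group of media packets, and any single loss within the group can be rebuilt from the rest. The sketch below shows only this one-dimensional principle, not the full ST 2022-5 row/column arrangement, and it assumes equal-length packets.

```python
def xor_parity(packets):
    """XOR equal-length packets together to form one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)

# Pretend packet 2 never arrived: XOR-ing the parity packet with the
# packets that did arrive reproduces the missing packet exactly.
received = group[:2] + group[3:]
recovered = xor_parity(received + [parity])
assert recovered == group[2]
# Two or more losses in one group cannot be recovered this way, which
# is why FEC alone cannot repair a large burst of congestion loss.
```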

Consequently, we are interested in two network measurements: how long individual packets take to reliably reach the receiver, and the maximum and minimum delay across all packets at the receiver. On the face of it, this sounds like an easy measurement to make with an analyzer such as Wireshark. However, a PC protocol analyzer's timestamps include the time taken for the NIC (Network Interface Card) to deliver the data and for the operating system to move it from the NIC to the main processor.
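
Given trustworthy per-packet timestamps, the two measurements themselves are straightforward to derive; obtaining those timestamps is the hard part, as the next paragraphs explain. The (send, receive) pairs below are assumed inputs, such as might come from a hardware-timestamped capture.

```python
# Minimal sketch: derive per-packet transit delay, plus the max/min
# across a stream, from assumed (send_time, receive_time) pairs.
samples = [(0.000000, 0.000180),         # seconds: (sent, received)
           (0.001000, 0.001150),
           (0.002000, 0.002420)]

delays = [rx - tx for tx, rx in samples]
print(f"min delay: {min(delays) * 1e6:.0f} us")
print(f"max delay: {max(delays) * 1e6:.0f} us")
print(f"delay variation: {(max(delays) - min(delays)) * 1e6:.0f} us")
```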

NICs have built-in buffers used to receive and transmit data on the Ethernet cable or fiber. On transmit, they provide a temporary store so a packet can be resent if a collision is detected on the Ethernet link; on receive, they hold packets until the processor has time to copy them into main memory and process them.

The buffers and the operating system introduce further delay into the system and make critical measurement very difficult. We cannot be sure whether we are measuring the time taken through the network or the time taken by the measuring system's OS and NIC to process the packets. This is one of the occasions where a hardware solution gives consistently better results than software tools.
