Because receivers use buffers to reassemble asynchronous streams, we must measure how long individual packets take to reliably reach the receiver, and the maximum and minimum delay of all packets arriving at it.
Video and audio monitoring in baseband formats is well established for levels, noise, and distortion. Television monitors provide subjective visual checks, and objective measurements can be taken using waveform monitors. Audio is similar: loudspeakers and headphones provide subjective checks, while PPMs, VUs and loudness meters provide objective verification.
In IT, subjective assessment means gauging the user experience: how long does a web page take to respond to a mouse click? How fast does a file transfer? For objective analysis, IT networks use packet-analysis tools such as Wireshark to look closely at the packets, and iperf to find the absolute maximum data rates of network links.
Video and audio bring a new dimension to monitoring for the IT department. Not only are we concerned with how to measure the video and audio itself, we must also analyze the time it takes each IP packet to arrive at its destination, and the variance across all other packets in the stream. If packets take too long, the receiver will drop them from its decoding buffer, causing signal corruption.
High level audio and video monitoring will always be important. Evangelists have often proclaimed that in a digital world we don’t need audio level monitoring as the signals don’t suffer the same distortion and level problems as analog lines. Anybody working at the front end of a broadcast station will tell you the reality is somewhat different.
In the past, broadcast engineers have had the luxury of assuming the underlying network is robust and solid. An SDI distribution system introduces only nanoseconds of delay at 3Gbps, and a twisted-pair balanced audio system will have similar delays with virtually no dropout.
IP networks are very different. They are designed on the assumption that there will be packet loss and variable delay. Because IP networks are resilient and self-healing, it is not just possible but likely that IP packets streamed across a network will take different routes, and some won't get there at all. If a router fails, the network's resilience will send subsequent IP packets via a different route, often a longer one than the original. If the first router recovers, later packets may again be sent over the shorter link, resulting in packets being received out of sequence.
Significant variation in the transmission of packets occurs due to the queueing that takes place in switches and routers. In integrated IP networks, transfers of all kinds of data are taking place, from accounts transactions to office files; video and audio are competing with this traffic to reach their destination.
Receiver buffering is a straightforward way of dealing with delay and sequencing problems. A buffer is a temporary storage area of computer memory into which packets are written as they arrive, out of sequence and with varying delay. The receiver algorithm then reads the packets out of the buffer in sequence and presents them to the decoding engine.
Buffers are a trade-off between delay and completeness of data. The longer the buffer, the more likely it is to capture packets that have taken a disproportionately long time to travel. However, the read-out algorithm must wait for the latest packet before releasing data. In effect, the bigger the buffer, the longer the delay.
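The read-out logic described above can be sketched as follows. This is a minimal illustration in Python, not a production jitter buffer; the `ReorderBuffer` class and its `depth` parameter are hypothetical names chosen for the example.

```python
import heapq

class ReorderBuffer:
    """Toy receiver-side reorder buffer.

    Packets arrive out of sequence; we hold up to `depth` packets in a
    min-heap keyed on sequence number and release the lowest-numbered
    packet once the buffer exceeds its working depth. A deeper buffer
    tolerates more reordering but adds `depth` packets of latency -
    the delay/validity trade-off described in the text.
    """

    def __init__(self, depth):
        self.depth = depth
        self.heap = []  # entries are (sequence_number, payload)

    def push(self, seq, payload):
        """Insert a packet; return the next in-order packet once the
        buffer is past its working depth, else None."""
        heapq.heappush(self.heap, (seq, payload))
        if len(self.heap) > self.depth:
            return heapq.heappop(self.heap)
        return None

# Packets arriving out of order are released back in sequence:
buf = ReorderBuffer(depth=3)
out = []
for seq in [2, 1, 4, 3, 6, 5, 7]:
    released = buf.push(seq, "payload")
    if released:
        out.append(released[0])
print(out)  # [1, 2, 3, 4]
```

Note that the buffer never emits a packet until three later packets have arrived, which is exactly the latency cost the article describes.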
Dropped packets are caused either by congestion in a switch or router, or by interference on a network cable. Congestion occurs when too many packets arrive at the router's inputs too quickly and the router cannot respond to them fast enough, or when the egress port becomes oversubscribed. Much processing goes on inside a router or switch; the more features the device provides, the greater the chance of packet loss.
This is one of the reasons IT engineers try to use layer 2 (Ethernet) switches wherever possible. They use look-up tables to decide where to send each frame based on the destination address in the Ethernet frame header; this is relatively simple and can be achieved in almost real-time using a bitwise comparison in an FPGA (Field Programmable Gate Array).
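The look-up-table forwarding decision can be illustrated with a short sketch. Real switches do this in hardware, as the article notes; the `LearningSwitch` class below is a hypothetical software model of the same logic, showing why the decision is simple enough to run at line rate.

```python
class LearningSwitch:
    """Toy model of layer 2 forwarding: a learning switch keeps a
    look-up table mapping MAC addresses to the ports they were last
    seen on, and forwards frames using only the destination address."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: the source MAC is reachable via the ingress port.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out one port;
        # an unknown destination floods all other ports.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # unknown dst: flood [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # dst now known: forward to [0]
```

The entire decision is a single table look-up on one header field, which is why it maps so cleanly onto an FPGA comparison.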
As a router needs to dig deeper into the Ethernet frame or IP packet, it requires more processing power and the potential for packet loss increases. This is one of the areas IT engineers tend to gloss over quickly, working on the assumption that congestion occurs infrequently, and that when it does, TCP- and FTP-type protocols will fix the problem by resending any lost packets.
In broadcast television, we cannot afford to drop even one packet. ST 2022-5 incorporates FEC (Forward Error Correction), but this isn't designed to take the place of TCP or FTP in fixing the large errors caused by congestion, and relying on it to do so could produce unpredictable results.
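The FEC in the ST 2022 family is based on XOR parity computed across a matrix of packets. The toy sketch below illustrates the underlying principle only, not the standard's actual packet formats: a single lost packet in a row can be rebuilt from the surviving packets plus the row's parity packet, but a burst that loses several packets in the same row cannot, which is why FEC is no substitute for retransmission under heavy congestion.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings together."""
    return bytes(x ^ y for x, y in zip(a, b))

# A "row" of three equal-sized media packets plus one parity packet.
row = [b"pkt1", b"pkt2", b"pkt3"]
parity = row[0]
for p in row[1:]:
    parity = xor_bytes(parity, p)

# Packet at index 1 is lost in transit; XOR the survivors with the
# parity packet and the missing payload falls out.
recovered = parity
for i, p in enumerate(row):
    if i != 1:
        recovered = xor_bytes(recovered, p)
print(recovered)  # b'pkt2'
```

Losing two packets from the same row leaves two unknowns and one parity equation, so recovery fails, mirroring the article's warning about large congestion-induced errors.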
Consequently, we are interested in two network measurements: how long individual packets take to reliably reach the receiver, and the maximum and minimum delay of all packets at the receiver. On the face of it this sounds like an easy measurement to make using analyzers such as Wireshark. However, PC protocol analyzers rely on receiving data from the NIC (Network Interface Card), and on the time taken for the operating system to move data from the NIC to the main processor.
NICs have built-in buffers used to receive and transmit data on the Ethernet cable or fiber. For transmission, they provide a temporary store should a collision be detected on the Ethernet link and the packet need to be retransmitted; for reception, they hold packets until the processor has time to copy them to main memory and process them.
The buffers and operating system introduce further delay into the system and make critical measurement very difficult. We cannot be sure whether we are measuring the time taken through the network, or the time taken by the measuring system's OS and NIC to process the data. This is one of the occasions where a hardware solution gives consistently better results than software tools.
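Given trustworthy timestamps, the two measurements described above are simple arithmetic. The sketch below assumes we already have paired send/receive timestamps from a hardware timestamping probe, sidestepping the OS and NIC uncertainty just discussed; the smoothed inter-arrival jitter estimator follows the RFC 3550 formulation used for RTP.

```python
def delay_stats(send_times, recv_times):
    """Return (min_delay, max_delay, smoothed_jitter) in seconds,
    given matched lists of send and receive timestamps."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        # RFC 3550: J = J + (|D| - J) / 16,
        # where D is the change in transit time between packets.
        jitter += (abs(cur - prev) - jitter) / 16
    return min(delays), max(delays), jitter

# Packets sent every 1 ms; transit time varies between 2 ms and 5 ms.
send = [i * 0.001 for i in range(5)]
recv = [s + d for s, d in zip(send, [0.002, 0.005, 0.003, 0.002, 0.004])]
lo, hi, j = delay_stats(send, recv)
print(lo, hi, j)  # minimum delay, maximum delay, smoothed jitter
```

The min and max delays bound the buffer depth a receiver needs, and the jitter figure tracks how much that bound is being exercised packet to packet.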