Understanding IP Broadcast Production Networks: Part 12 - Measuring Line Speeds

Broadcast and IT engineers take very different approaches to network speed and capacity - it is essential to reach a shared understanding.

In the analog days, broadcast engineers used frequency response to measure line capability. An audio twisted pair had a bandwidth of at least 20kHz and video had a bandwidth of approximately 8MHz. As we moved to AES3 digital audio and HD-SDI, these requirements increased to 3Mb/s and 1.485Gb/s respectively.

Digital audio and video systems send high and low voltages to represent the ones and zeros. However, at higher frequencies we need to take into consideration the analog qualities of the transmission cable and equalizing circuits. Return loss, reflections and phase distortion are all significant factors needing consideration.

When speaking to the IT department, we might think that we should no longer worry about these qualities as signal paths are defined in bits per second. An internet line might be defined as 200Mb/s and an SFP link might be defined as 1Gb/s.

IT engineers tend to be product specialists in their own fields, such as Microsoft, Linux and Cisco. Very few of them will have studied transmission theory in the way broadcast engineers have in the past, especially those who worked on transmitters. This can lead to some very frustrating and confusing conversations between IT and broadcast engineers.

An IT engineer might tell you that the bandwidth of a circuit is 200Mb/s, or that the delay is negligible. At this point a broadcast engineer's blood boils as they have flashbacks to the Telegrapher's equations and two-port networks. The simple answer is that IT engineers think in terms of service level agreements: if a Telco has provided a circuit with 200Mb/s capacity, then they assume and expect that to be true.

IT engineers think of network capacity in terms of bits/second, as opposed to broadcast engineers who think in terms of bandwidth, return loss, phase distortion and reflections. Further problems arise when we start discussing actual data throughput in a system.

Broadcast engineers will assume a 10Mb/s point-to-point network will allow them to send 10Mb/s of signal data. As IT networks are based on packets of data, there is an inherent loss of capacity due to the packet headers and inter-packet gaps. A 10Mb/s circuit might only have a useful data rate of 9.5Mb/s.

Ethernet frames are generally split into two parts, the header and the payload. The header includes information such as the source and destination addresses and the payload type. The payload carries a higher-level protocol such as IP, which in turn contains its own header (with fields such as addresses, flags and counters) and payload.

Drilling down into the Ethernet frame, there are also four octets of Cyclic Redundancy Check (CRC) appended to the end of the frame, followed by twelve octets of inter-packet gap before the next frame can be sent.

When discussing networks, engineers use octets instead of bytes to represent eight bits. This removes any ambiguity, as computer science tells us a byte is "a unit of information" whose size depends on the hardware design of the computer and could just as easily be eight bits, ten bits or one hundred bits.

Figure 1 - Ethernet packets have an overhead from the header and CRC reducing data throughput.

Our Ethernet frame generally consists of 1,542 octets, or 12,336 bits. Only the payload is available to us when sending audio and video, which is 1,530 octets, or 12,240 bits. If a camera uses UDP/IP to send its data over Ethernet, a further 20 octets of the Ethernet payload are lost to the UDP header and checksum information, leaving 1,510 octets, or 12,080 bits, resulting in approximately 98% data throughput.
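
As a quick sanity check, this overhead can be worked out in a few lines of Python using the octet counts quoted above. Treat the exact figures as illustrative, since the precise overhead depends on factors such as VLAN tags, IP options and the payload size in use:

    # Rough throughput estimate using the octet counts quoted above.
    FRAME_ON_WIRE = 1542      # octets per frame, including header, CRC and inter-packet gap
    ETHERNET_PAYLOAD = 1530   # octets quoted above as available payload
    UDP_IP_OVERHEAD = 20      # octets quoted above for UDP header information

    usable_octets = ETHERNET_PAYLOAD - UDP_IP_OVERHEAD    # 1,510 octets
    efficiency = usable_octets / FRAME_ON_WIRE             # ~0.979

    line_rate_mbps = 200
    print(f"Efficiency: {efficiency:.1%}")                               # ~97.9%
    print(f"Usable data rate: {line_rate_mbps * efficiency:.0f} Mb/s")   # ~196 Mb/s

Multiplying a 200Mb/s line rate by this roughly 98% efficiency is where the 196Mb/s figure below comes from.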

The theoretical maximum of 98% assumes no congestion. The IT engineer is expecting 200Mb/s, but only 196Mb/s of usable data is available. Clearly the Telco will dispute this, as they will show they are providing a 200Mb/s circuit, and the fact that you are using 2% of it on packet header information is your problem, not theirs. To the letter of the contract, they are probably correct.

This may not sound like a lot, but if you suddenly find your network has a 2% reduction in capacity, you could find yourself with some tough questions to answer when approaching the Finance Director for more cash.

When analyzing network throughput, a thorough understanding of the protocols in use is essential.

Transmission Control Protocol (TCP) sits on top of IP/Ethernet and provides a reliable, connection-based system that allows two devices to exchange data. Low-latency video and audio distribution standards such as ST 2110 don't use TCP/IP; TCP's data integrity is very high, but its throughput can be low and highly variable. However, some video and audio distribution systems do use TCP/IP, and understanding how it works is critical when considering data throughput and latency.

TCP works by sending a group of packets and then waiting for an acknowledgement ("Ack") packet from the receiver. If the Ack isn't received, the same packets are resent, eventually timing out and reporting an error to the user if too many failures happen in succession. If the Ack is received by the sender, then the next group of packets is sent. The reduced data throughput is the price we pay for this guarantee of delivery.
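
A minimal Python sketch of this send-and-acknowledge pattern is shown below. The send_group() and wait_for_ack() callables are hypothetical stand-ins for the real transport, and real TCP adds windowing and congestion control on top, but the retry loop illustrates why throughput falls whenever an Ack is late or lost:

    MAX_RETRIES = 5   # illustrative limit before reporting an error

    def send_reliably(groups, send_group, wait_for_ack):
        # groups: an iterable of packet groups to transmit
        # send_group(group): hypothetical callable that transmits one group
        # wait_for_ack(): hypothetical callable returning True if an Ack arrives in time
        for group in groups:
            for _attempt in range(MAX_RETRIES):
                send_group(group)        # transmit (or retransmit) the group
                if wait_for_ack():       # Ack received: move on to the next group
                    break
                # no Ack: loop round and resend the same group
            else:
                raise TimeoutError("too many unacknowledged sends in succession")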

UDP/IP, which ST 2022 and ST 2110 use to transport video and audio, is a "fire and forget" system. The camera outputs the packets and doesn't use any form of testing to confirm they were received at the destination, such as a production switcher. ST 2022-5 has forward error correction (FEC) built in to provide some resilience over the network.
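
By way of contrast, the sketch below uses a plain Python UDP socket to show the fire and forget behavior: sendto() returns as soon as the datagram is handed to the network stack, and nothing checks whether it arrived. The address, port and dummy payload are illustrative only; this is not an ST 2110 sender:

    import socket

    DEST = ("192.0.2.10", 5004)    # illustrative destination address and port
    payload = bytes(1472)          # one datagram of dummy media data

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(1000):
        sock.sendto(payload, DEST)  # returns immediately; no Ack, no resend
    sock.close()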

Figure 2 - IPerf line speed testing.

“IPerf” is used to actively measure the maximum achievable bandwidth in an IP network; it is the closest tool we have for measuring a network's capacity and is released under the BSD license. It runs on two computers, a transmitter at one end of the network and a receiver at the other. It works by flooding the link with IP datagrams and using the receiver to check their validity; consequently, the measurement is potentially disruptive to other users of the network and must be run in isolation.

Operating as a command line tool, IPerf can make many different measurements, from UDP/IP line speed to TCP throughput. TCP measurements will always be slower because IPerf is then measuring the speed of the protocol, not the network line speed.
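
As a rough guide, a test with iperf3 (the current incarnation of IPerf) might look like the commands below, run as a server on one machine and a client on the other. The address and bit rate are examples only, so check the options against the version installed on your systems:

    iperf3 -s                          # on the receiving machine: run as a server
    iperf3 -c 192.0.2.10 -u -b 200M    # on the sender: UDP test at 200Mb/s towards the server
    iperf3 -c 192.0.2.10               # on the sender: TCP throughput test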

When working with the IT department, great care must be taken to understand exactly what you are measuring.
