Understanding IP Broadcast Production Networks: Part 12 - Measuring Line Speeds
Broadcast and IT engineers take very different approaches to network speed and capacity - it is essential to reach a shared understanding.
In the analog days, broadcast engineers used frequency response to measure line capability. An audio twisted pair had a bandwidth of at least 20kHz and video had a bandwidth of approximately 8MHz. As we moved to digital audio (AES3) and HD-SDI, these requirements increased to approximately 3Mb/s and 1.485Gb/s respectively.
Digital audio and video systems send high and low voltages to represent the ones and zeros. However, at the higher frequencies we need to take into consideration the analog qualities of the transmission cable and equalizing circuits. Return loss, reflections and phase distortion are all significant factors needing consideration.
When speaking to the IT department, we might think that we should no longer worry about these qualities as signal paths are defined in bits per second. An internet line might be defined as 200Mb/s and an SFP link might be defined as 1Gb/s.
IT engineers tend to be product specialists in their own fields of Microsoft, Linux, and Cisco. Very few of them will study transmission theory in the way broadcast engineers have in the past, especially those who worked in transmitters. This can lead to some very frustrating and confusing conversations between IT and broadcast engineers.
An IT engineer might tell you that the bandwidth of a circuit is 200Mb/s, or that the delay is negligible. At this point a broadcast engineer's blood would boil as they have flashbacks to the Telegrapher's equations and two-port networks. The simple answer is that IT engineers think in terms of service level agreements: if a Telco has provided a circuit with 200Mb/s capacity, then they assume and expect that to be true.
IT engineers think of network capacity in terms of bits/second, as opposed to broadcast engineers who think in terms of bandwidth, return loss, phase distortion and reflections. Further problems arise when we start discussing actual data throughput in a system.
Broadcast engineers will assume a 10Mb/s point-to-point link will let them send 10Mb/s of useful data, just as a 10MHz analog circuit carries a 10MHz signal. As IT networks are based on packets of data, there is an inherent loss of capacity due to headers and inter-packet gaps. A 10Mb/s circuit might only have a useful data-rate of 9.5Mb/s.
Ethernet frames are generally split into two parts, the header and the payload. The header includes information such as source and destination addresses, payload types, packet counts and flags. The payload carries a higher-level protocol such as IP, which in turn contains its own header and payload.
Drilling down into the Ethernet frame, we also have four octets of Cyclic Redundancy Check (CRC) and a twelve-octet inter-packet gap appended to the end of the frame.
When discussing networks, engineers use octets instead of bytes to represent eight bits. This removes any ambiguity, as computer science tells us a byte is “a unit of information” whose size depends on the hardware design of the computer, and could just as easily be eight bits, ten bits or one hundred bits.
Our Ethernet frame generally consists of 1,542 octets, or 12,336 bits. Only the payload is available to us when sending audio and video, which is 1,530 octets, or 12,240 bits. If a camera uses UDP/IP to send its data over Ethernet, a further 20 octets of the Ethernet payload are lost to the IP header, leaving 1,510 octets, or 12,080 bits, resulting in approximately 98% data throughput.
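As a rough illustration, the sketch below works through this overhead arithmetic in Python using the approximate frame sizes quoted above. The exact figures will vary with frame size, VLAN tagging and the full protocol stack in use, so treat the result as indicative rather than definitive.

```python
# Back-of-envelope Ethernet overhead arithmetic using the approximate
# figures quoted above. Real numbers vary with frame size, VLAN tagging
# and the full protocol stack (the 8-octet UDP header and any RTP header
# would trim the result a little further).

BITS_PER_OCTET = 8

frame_octets = 1542      # whole frame on the wire, incl. preamble, CRC and inter-packet gap
payload_octets = 1530    # octets usable for audio/video payload (as quoted above)
ip_header_octets = 20    # IPv4 header carried inside the Ethernet payload

useful_octets = payload_octets - ip_header_octets
efficiency = useful_octets / frame_octets

line_rate_mbps = 200     # the capacity the Telco is selling
usable_mbps = line_rate_mbps * efficiency

print(f"Useful payload per frame: {useful_octets} octets "
      f"({useful_octets * BITS_PER_OCTET} bits)")
print(f"Efficiency: {efficiency:.1%}")                       # approximately 98%
print(f"Usable data on a {line_rate_mbps}Mb/s circuit: {usable_mbps:.0f}Mb/s")
```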
The theoretical maximum of 98% assumes no congestion. The IT engineer is expecting 200Mb/s, but the Telco is providing 196Mb/s of usable data. Clearly the Telco will dispute this, as they can show they are providing a 200Mb/s circuit, and the fact that you are using 2% of it on packet header information is your problem, not theirs. To the letter of the contract, they are probably correct.
This may not sound like a lot, but if you suddenly find your network has a 2% reduction in capacity, you could find yourself with some tough questions to answer when approaching the Finance Director for more cash.
When analyzing network throughput, a thorough understanding of the protocols in use is essential.
Transmission Control Protocol (TCP) sits on top of IP/Ethernet and provides a reliable, connection-oriented mechanism allowing two devices to exchange data. Low-latency video and audio distribution standards such as ST 2110 do not use TCP/IP; its data integrity is very high, but its throughput can be low and highly variable. However, some video and audio distribution systems do use TCP/IP, and understanding how it works is critical when considering data throughput and latency.
TCP works by sending a group of packets and then waiting for an acknowledgment packet from the receiver. If the “Ack” isn’t received, the same packets are resent, eventually timing out and reporting an error to the user if too many of these failures happen in succession. If the “Ack” is received by the sender, the next group of packets is sent. The reduced data throughput is the price we pay for this data guarantee.
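A heavily simplified sketch of this send-and-acknowledge cycle is shown below. It is illustrative Python, not the real TCP state machine, and the names send_group and wait_for_ack are hypothetical stand-ins for the transmission layer.

```python
# Heavily simplified illustration of TCP's send/acknowledge cycle.
# This is NOT the real TCP state machine; send_group() and wait_for_ack()
# are hypothetical stand-ins for the underlying network layer.

MAX_RETRIES = 5

def send_reliably(groups, send_group, wait_for_ack):
    """Send each group of packets, resending until an Ack arrives."""
    for group in groups:
        retries = 0
        while True:
            send_group(group)
            if wait_for_ack(timeout=1.0):   # Ack received: move to the next group
                break
            retries += 1                    # no Ack: resend the same group
            if retries > MAX_RETRIES:
                raise TimeoutError("Too many unacknowledged retransmissions")
```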
UDP/IP, which ST 2022 and ST 2110 use to transport video and audio, is a “fire and forget” protocol. The camera outputs the packets and doesn’t perform any check to confirm they were received at the destination, such as a production switcher. ST 2022-5 provides forward error correction (FEC) to give some resilience over the network.
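By contrast, a minimal “fire and forget” sender, sketched below with Python’s standard socket module, simply transmits datagrams and never waits for a response. The destination address and payload are placeholders for illustration only.

```python
# Minimal "fire and forget" UDP sender using Python's standard socket module.
# The destination address and payload are placeholders; nothing here waits
# for, or even expects, an acknowledgment from the receiver.

import socket

DESTINATION = ("192.0.2.10", 5004)   # example documentation address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for sequence in range(10):
    datagram = f"media packet {sequence}".encode()
    sock.sendto(datagram, DESTINATION)   # sent once; no retransmission, no Ack
sock.close()
```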
“IPerf” is used to actively measure the maximum achievable bandwidth in an IP network; it’s the closest tool we have for measuring a network’s capacity and is released under the BSD license. It runs on two computers, a transmitter at one end of the network and a receiver at the other. It works by flooding the link with IP datagrams and using the receiver to check their validity; consequently, the measurement is potentially destructive to other users of the network and must be run in isolation.
Operating as a command line tool, IPerf can make many different measurements, from UDP/IP line-speed to TCP throughput. TCP measurements will always be slower, as IPerf will then be measuring the speed of the protocol, not the network line-speed.
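As an illustration, the sketch below drives a UDP test with iperf3 (the common modern implementation) from Python and reads the measured rate back from its JSON report. The server address is a placeholder, it assumes an iperf3 server is already running at the far end, and the JSON field names can differ between iperf versions, so treat this as a starting point rather than a recipe.

```python
# Sketch: run an iperf3 UDP test from Python and read back the measured rate.
# Assumes iperf3 is installed and a server ("iperf3 -s") is already running
# at the address below; JSON field names may differ between iperf3 versions.

import json
import subprocess

SERVER = "192.0.2.20"   # placeholder iperf3 server address

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-u", "-b", "200M", "-t", "10", "--json"],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
summary = report["end"]["sum"]   # UDP summary section of the JSON report
print(f"Measured rate: {summary['bits_per_second'] / 1e6:.1f} Mb/s")
print(f"Lost packets:  {summary.get('lost_percent', 0):.2f}%")
```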
When working with the IT department, great care must be taken to understand exactly what you are measuring.