Production & Post Global Viewpoint – March 2022
Tower Bandwidth

5G is starting to make some real noise in the broadcast industry, but delivering the promised gains takes more than RF bandwidth alone.
Broadcasters were spoiled in the early days of television signal distribution with guaranteed bandwidth and predictable latency. Telcos, albeit at massive cost, devoted entire infrastructures to delivering video and audio over dedicated networks. Even when they moved to IP, telcos used highly optimized and managed networks, and few could tell the difference between them and dedicated SDI circuits. The interface was still SDI, but the infrastructure was often IP.
To keep IP as flexible as possible and allow it to be carried over many different and often unrelated transport networks, the design necessitated a system based on unreliable delivery: packets are sent without acknowledgment, and the network itself never retransmits them. This may look like a disaster, but it’s not, as it leaves many more possibilities open for the network operator. A further upside of this type of delivery is that latency is low, or at least as low as it can be for a packet delivery system, precisely because nothing ever waits for a retransmission.
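As a concrete illustration of unreliable delivery, the Python sketch below sends numbered UDP datagrams with no handshake and no retransmission, and the receiver simply counts the gaps. The host, port, and packet count are illustrative, not drawn from any particular broadcast system.

```python
import socket
import struct

HOST, PORT = "127.0.0.1", 9000   # hypothetical endpoint for the sketch

def send(count=1000):
    """Fire numbered datagrams: no handshake, no ACK, no retransmit."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(count):
        sock.sendto(struct.pack("!I", seq), (HOST, PORT))
    sock.close()

def receive():
    """Count sequence gaps: lost datagrams are simply gone."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(2.0)
    expected, lost = 0, 0
    try:
        while True:
            data, _ = sock.recvfrom(4)
            seq = struct.unpack("!I", data)[0]
            if seq > expected:            # a gap: datagrams lost or reordered
                lost += seq - expected
            expected = seq + 1
    except socket.timeout:
        print(f"last seq seen: {expected - 1}, approx. {lost} datagrams missing")
```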
The words “broadcast” and “unreliable” don’t fit together very well. However, even SDI networks suffer some data loss; there is no guaranteed delivery. We can detect data corruption through the CRC, but SDI fundamentally doesn’t guarantee absolute delivery; it’s just very reliable. How many engineers measure the error count on every link in their SDI network? We certainly take a closer look at the CRC error count when we start seeing picture disturbance, but it’s unlikely we will ever notice a single dropped or corrupted pixel.
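To make the detection-versus-delivery distinction concrete: a CRC tells us a word was damaged; it does not bring the word back. SDI defines its own per-line CRCs (in SMPTE ST 292-1, for example); the sketch below uses zlib’s CRC-32 purely as a stand-in to show the principle.

```python
import zlib

payload = bytes(range(256))          # stand-in for one line of video samples
crc_at_source = zlib.crc32(payload)  # computed before transmission

received = bytearray(payload)
received[100] ^= 0x01                # a single bit flipped in transit

if zlib.crc32(bytes(received)) != crc_at_source:
    # Detection succeeds, but the damaged sample is not recovered.
    print("CRC mismatch: corruption detected, yet the pixel is still wrong")
```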
My point here is that we must take a fresh look at our attitude to packet loss. Just as with SDI networks, packet loss in IP networks is inevitable. The real question is: how much data loss are we willing to accept? We can always improve reliability with TCP, but we do so at the expense of latency!
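A back-of-envelope sketch of that trade-off, with assumed rather than measured figures: every packet TCP retransmits stalls in-order delivery for roughly one extra round trip, so the loss rate and the RTT together set the latency penalty.

```python
# First-order approximation of TCP's latency cost, ignoring head-of-line
# blocking of the packets queued behind a loss. All figures are assumptions.
rtt_ms = 40.0    # assumed round-trip time from camera to facility
loss = 0.001     # assumed packet loss rate (0.1%)

worst_case_stall_ms = rtt_ms            # one retransmit round trip per loss
mean_added_latency_ms = loss * rtt_ms   # averaged over all packets

print(f"worst-case stall per lost packet: {worst_case_stall_ms:.1f} ms")
print(f"mean added latency per packet: {mean_added_latency_ms:.3f} ms")
```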
5G is making some incredible claims, such as “1,000 times the bandwidth of 4G” and “ultra-low latency”. But where is this bandwidth manifesting itself? I’m certain that in a well-designed RF network these bandwidth and latency claims are achievable as far as the 5G tower, but what concerns me is where the point of demarcation occurs. Assuming we’re streaming IP packets back to the broadcast facility, where do the bandwidth and latency get squeezed?
There are many IP tools that allow us to check bandwidth and latency, iPerf being one of them. But even with these we must be careful about what we are measuring: the raw throughput of IP datagrams, or TCP goodput after retransmissions? And how do other users of the network influence our own link? One of the advantages of IP networks is that operators can share bandwidth across many users, creating statistical peaks and troughs of bandwidth and, by implication, latency.
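As one way of pinning that distinction down, the sketch below drives iperf3 from Python and reads its JSON report for a UDP test; the default TCP test reports goodput after retransmissions, while the -u flag measures raw delivery, loss, and jitter. The server address is illustrative, and the field names are taken from iperf3’s usual JSON output (run `iperf3 -s` at the far end first).

```python
import json
import subprocess

SERVER = "192.0.2.10"   # hypothetical iperf3 server at the broadcast facility

# UDP test: 20 Mbit/s for 10 seconds, machine-readable output.
result = subprocess.run(
    ["iperf3", "-c", SERVER, "-u", "-b", "20M", "-t", "10", "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

summary = report["end"]["sum"]          # UDP summary block in the JSON report
print(f"throughput: {summary['bits_per_second'] / 1e6:.1f} Mbit/s")
print(f"loss: {summary['lost_percent']:.2f}%   jitter: {summary['jitter_ms']:.2f} ms")
```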
I’m sure 5G will deliver massive gains for broadcasters, but as always, it’s the job of the broadcast engineer to cut through the marketing hype and quantify the claims. Understanding SDI networks is child’s play compared to untangling the interactions of IP, TCP, Wi-Fi, and 5G.