Live Sports Production: Backhaul In Live Sports Production

Getting content reliably and securely from venue to studio remains key to live sports production, so here we discuss the technology and services required.
Regardless of the approach selected for live sports production, whether full OB, full remote or a hybrid remote variant, the perennial challenge of transporting content from the venue to the studio and establishing a reliable return path remains fundamental to success.
In our conversations preparing this series it rapidly, and unsurprisingly, became apparent that the manner of transport for backhaul (satellite, leased line, public internet or cellular) is viewed by production teams more as a ‘service’ that interconnects parts of the production infrastructure than as an element of the production system that the engineering team must directly ‘own’ and guarantee.
Content is captured on site, encoded, and passed to a service provider. The service provider delivers it to a destination (be that a studio data center or a public cloud server) and from there the broadcaster distributes it as required by production or delivery.
The backhaul service is negotiated between the broadcaster and service providers and is subject to some form of SLA. In the event of performance challenges, it is the service provider(s) who must diagnose the problem and take responsibility for the service.
The transport method selected is determined by a combination of requirement, availability and cost, with ‘requirement’ often being the thing that gets squeezed when availability or cost becomes a restrictive factor. This perception is perhaps reinforced by the fact that, whilst content is in transit between venue and destination, the broadcaster usually has little or no control over, or even insight into, the paths taken or the physical technology used.
Calculating required network bandwidth is a matter of mathematics: to transmit full uncompressed video requires a bit rate ranging from around 2 Gbps for 8-bit color depth with 4:2:2 chroma subsampling, up to as much as 8 Gbps for RGBA color at 16 bits per channel. Forty uncompressed camera feeds for a high-profile soccer game might therefore require between 80 Gbps and 320 Gbps of bandwidth. This might work for an on-site contribution network, or a production network within a facility or truck, given sufficient switch capacity, but probably not for backhaul in most circumstances, so some form of compression before feeding into backhaul is almost always required. This compression inevitably adds some degree of computational latency.
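As a back-of-envelope check on those figures, here is a minimal sketch in Python. The 1080p60 raster is an assumption on our part (it is consistent with the 2 Gbps and 8 Gbps figures quoted), and only active pixels are counted:

```python
# Back-of-envelope uncompressed video bit rates (active pixels only;
# blanking and audio ignored). Assumed raster: 1080p60.
WIDTH, HEIGHT, FPS = 1920, 1080, 60

def uncompressed_gbps(bits_per_pixel: int) -> float:
    """Raw bit rate in Gbps for the assumed raster."""
    return WIDTH * HEIGHT * FPS * bits_per_pixel / 1e9

# 4:2:2 at 8 bits: 8 bits luma plus 8 bits of chroma averaged per pixel
ycbcr_422 = uncompressed_gbps(16)   # ~1.99 Gbps
# RGBA at 16 bits per channel: 4 channels x 16 bits = 64 bits per pixel
rgba_16 = uncompressed_gbps(64)     # ~7.96 Gbps

for name, gbps in [("4:2:2 8-bit", ycbcr_422), ("RGBA 16-bit", rgba_16)]:
    print(f"{name}: {gbps:.2f} Gbps per feed, "
          f"~{40 * gbps:.0f} Gbps for 40 feeds")
```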
Compression is typically achieved using high density hardware encoders located on-site in a flypack, rack or OB truck. As was discussed in our Where’s The Compute? article in Part 1 of this series, dedicated FPGA based hardware compression is widely favored over COTS equivalents because of the density of compression per RU of space, along with advantages in power consumption, heat and weight. As CPU and GPU technologies evolve this may change.
When it comes to selecting a compression codec there are a number of options, but JPEG XS (ISO/IEC 21122) has become something of a de facto standard because it delivers up to 10:1 compression without noticeable degradation in quality and has been extensively used in live sports production.
Faith in the visual performance of JPEG XS has reached the point where most of our contributors referred to it as ‘lossless’ at some point in the conversation – which they all know is not technically true. JPEG XS is subjectively visually transparent in that it is difficult to see the difference between uncompressed and JPEG XS compressed video at ratios of up to 10:1. The laws of physics insist that some loss has occurred. JPEG XS is also highly efficient, introducing only between 1 and 32 lines of latency relative to uncompressed video, depending on settings.
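Taking the 10:1 ratio and the line-based latency together, a quick sketch puts both in concrete terms. The 1080p60 format, and its 1125 total lines per frame, are assumptions carried over from the earlier arithmetic:

```python
# JPEG XS back-of-envelope: bandwidth after ~10:1 compression, and
# line-based latency expressed as wall-clock time.
# Assumed format: 1080p60, 1125 total lines per frame (active + blanking).
FPS, TOTAL_LINES = 60, 1125

# Bandwidth: the per-feed figures from the uncompressed arithmetic above.
for name, gbps in [("4:2:2 8-bit", 1.99), ("RGBA 16-bit", 7.96)]:
    print(f"{name}: ~{gbps / 10 * 1000:.0f} Mbps per feed at 10:1")

# Latency: one line takes 1 / (fps * total_lines) seconds to scan.
line_time_us = 1e6 / (FPS * TOTAL_LINES)   # ~14.8 microseconds per line
for lines in (1, 32):
    print(f"{lines:>2} lines of latency = {lines * line_time_us:.0f} us")
# Even 32 lines (~474 us) is a small fraction of one frame (~16.7 ms).
```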
Here are some selected comments from our discussion of connectivity, compressed vs uncompressed workflows, bandwidth requirements and compression codecs in live sports infrastructures.
John Guntenaar, Chief Technology Officer, NEP Europe: “There are a lot of locations where there is a connectivity provider active, such as NEP Connect, for instance, where we could be backhauling signals for REMI on SDI, over a managed network service or leased line that has a very high level of predictability of performance. In other locations we might have a 10 or 100 gig network connectivity via an internet provider back to our data center. For a connected production, where we have a hybrid setup, or where we are doing a fully remote production, we normally have a 10 or 100 gig network. More often, we have a 10 gig network at the location, so we will be using something like JPEG XS compression. In other locations where we have lower bandwidth available, we could use more lossy encoding and something like SRT or other options. This varies depending on the location.”
Dan Turk, Chief Technology Officer, NEP Americas: “It completely depends on the quality and what you’re trying to do. In a full REMI production, when we’re sending all of the on-air cameras back, we’ve settled on JPEG XS for now. If you’re using every single camera, that is going to require a lot of encoding onsite. For now, JPEG XS is our high-end lossless compression but there are some huge strides going on based on HEVC with a lot of good codecs and new ways people are doing it, so that’s a moving target right now. I have seen one new proprietary codec that is delivering beyond 10:1 and you can take two high end master grade monitors and put JPEG XS in one and the proprietary codec on the other, and it’s very hard to see the difference.”
Broadcast Bridge: HEVC is the next generation evolution of AVC (H.264). It brings a number of improvements that reach far beyond compression and you can read all about it in this article within our Broadcast Standards book.
Dan Turk, NEP Americas: “A 10 gig circuit is pretty easy to obtain. The holy grail for future compression would be a one gig circuit. If we could do this on a one gig connection, we could go anywhere and do anything. A 10 gig is relatively straightforward, but when you get above 10 gigs it becomes something you need to plan more carefully for, and perhaps it needs to be a facility with connectivity. If you look at the bandwidth needed for JPEG XS, it gets pretty big, fairly quickly.”
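To illustrate why JPEG XS bandwidth “gets pretty big, fairly quickly”, here is a hypothetical feed-count calculation. The ~200 Mbps per-feed figure follows from the 10:1 arithmetic above, and the 20% of capacity reserved for return feeds, comms and data is our assumption:

```python
# Feed counts per circuit at the ~200 Mbps JPEG XS figure derived above,
# with an assumed 20% of capacity reserved for return, comms and data.
FEED_MBPS = 200
for circuit_gbps in (1, 10):
    usable_mbps = circuit_gbps * 1000 * 0.8   # 20% headroom assumed
    print(f"{circuit_gbps:>2} Gbps circuit: ~{int(usable_mbps // FEED_MBPS)} feeds")
# ~4 feeds on 1 Gbps, ~40 on 10 Gbps - which is why 10 gig suits a full
# REMI camera complement while 1 gig demands far deeper compression.
```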
Rainer Kampe, CTO at Broadcast Solutions: “If the transport service is clear and the bandwidth is clear, and you have enough bandwidth for JPEG XS then we would go JPEG XS. If we need to reduce further, then it’s typically SRT based H.264. Obviously SRT is just the transport protocol and it doesn’t define the compression codec.”
Broadcast Bridge: The casual reference to using H.264 (AVC) belies the complexity and depth of the MPEG-4 Part 10 standard. It can be applied in a wide variety of ways. You can read a detailed exploration of it in this article within our Broadcast Standards book.
All of our contributors make reference to using SRT or RIST for backhaul when using internet services. Both RIST and SRT are proven and widely deployed protocols that offer broadcasters a means to achieve reliable delivery over internet connections which are inherently contended, and thus potentially lossy. There are some things to be mindful of in the use of these protocols which are discussed within our recent ‘Ground To Cloud’ article within the ‘Building Software Defined Infrastructure’ series.
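Both protocols achieve reliability by buffering long enough to retransmit lost packets, which makes the configured latency the key tuning parameter. The sketch below is indicative only: the 3-4x round-trip-time rule of thumb and the 120 ms floor reflect common SRT guidance, but appropriate values depend on the loss actually measured on the path:

```python
# Rough SRT receive-latency budget. Common guidance (the exact multiplier
# depends on measured packet loss) is roughly 3-4x round-trip time,
# and never below the protocol's 120 ms default.
def srt_latency_ms(rtt_ms: float, multiplier: float = 4.0,
                   floor_ms: float = 120.0) -> float:
    """Suggested SRT latency setting for a given round-trip time."""
    return max(rtt_ms * multiplier, floor_ms)

for rtt in (10, 50, 150):   # e.g. metro, national, transoceanic paths
    print(f"RTT {rtt:>3} ms -> SRT latency ~{srt_latency_ms(rtt):.0f} ms")
```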
Patrick Daly, VP Media Innovations at Diversified: “Thankfully, much like compute, connectivity is on a curve as well. The cost is going down. The capability is going up over time. While that’s true on average, where exactly we’re talking about matters. It matters for more than just sports network distribution; the ability to send your content into South America, for example, is often what keeps people on satellite.”
“I think one of the more interesting challenges is around the sports betting market and addressing the concerns for that community. There’s a consumer expectation that everyone is watching the game at the same time, but those of us in the know realize that player latency is different in some delivery methods. We need to ensure that for content feeds and the data feeds, we’re accommodating a minimum and uniform latency delivery to those participants. Then we have to consider that some of those participants may be in venue, and we have to be sensitive to our worldwide distribution. How far off of real time are those feeds? That’s probably one of the more interesting sets of challenges, and it kind of folds in with the operational challenges of having remote operators and a centralized tech stack. If my operator is looking at a frame on their screen and hits take, is it really going to take at that frame? That’s a really hard problem to solve. There are some solution providers that think about those types of challenges, and some who are just starting to realize that’s a problem. Capturing the entire spectrum of requirements is important.”
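To put that ‘take at that frame’ problem in numbers, a hypothetical sketch: at an assumed 60 fps, each frame lasts about 16.7 ms, so every step of path latency puts the remote operator further behind the venue:

```python
# How far behind real time is a remote operator's view? Assumed 60 fps
# and illustrative one-way glass-to-glass latencies.
FPS = 60
frame_ms = 1000 / FPS                # ~16.7 ms per frame

for path_ms in (100, 250, 500):      # hypothetical venue-to-operator delays
    print(f"{path_ms} ms path -> ~{path_ms / frame_ms:.0f} frames behind venue")
# A frame-accurate 'take' therefore needs timestamped control, so the
# switcher acts on the frame the operator actually saw - low latency
# alone is not enough.
```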