Live Sports Production: Latency, Monitoring & The Future

Latency has always been a fact of life in broadcast infrastructure. It is something for operators to learn to adjust to and for engineering to effectively manage… but is it more of a challenge in remote production models?

All forms of broadcast production have latency. Encoders, transcoders and processors inherently add it, and so does the propagation of signals over networks. What really matters is whether the amount of latency in any given production system is manageable within the reasonable confines of human operation. Time to talk to our day-to-day managers of live sports production systems about how much is too much, and how the different production models compare.

Latency

The Broadcast Bridge: Does latency still have an impact? Is it all solved? Is the connectivity now good enough that the latency issues have all gone away, or is that still something you have to think hard about?

Damien Hesse. NEP Americas: “Latency is still something we’re always monitoring, managing, and improving. With a switcher, there are only so many milliseconds of delay you can have before the T-bar doesn’t behave the right way, the effects aren’t cutting and the status isn’t working correctly. When a link comes up we have to make sure that latency isn’t going to become a challenge.”

The Broadcast Bridge: Is latency a bigger challenge with remote?

Dafydd Rees. NEP UK: “It certainly can be. One example is a racing series we’ve previously supported in desert locations across the world, where everything was delivered by satellite. It was a completely remote production. Some of it was hybrid remote, but everything was delivered by satellite because there wasn’t connectivity in the places we were capturing content. The immediate thought was, it’s going to be at least two seconds before it gets from site to production. It depends how much of a relationship you have between the production site and the remote site. If you’re accepting those feeds from the remote site and you’re doing a cut in London, for example, as long as everything arrives concurrently it doesn’t matter whether it’s three milliseconds later or twenty seconds later; you’re still doing the cut based on what’s in front of you.”

“The complication comes when you try to do a two-way with somebody onsite or have a conversation with the camera ops, because you can only direct them on what’s in front of you, and if that’s twenty seconds later, then that becomes a challenge. Then you get to the point where you’re building trust between yourself in one location and the team in the other, that everyone is working as one. Even though there’s latency between the two places, there’s an open flow and you know what to expect. There are ways around it, whether it’s a 20-millisecond delay or one second. It’s manageable, and a lot of that comes down to the compromise between link speed, cost, bandwidth, availability, and location. If you’re on a fast, connected network and you can run JPEG XS or something quick, then the whole question goes away, because even across the globe, you are only a couple of hundred milliseconds apart.”
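
As a rough illustration of that last point, the sketch below adds fibre propagation delay to codec delay for a hypothetical long-distance contribution path. The 200 km-per-millisecond figure for light in fibre, the 8,000 km distance and the codec delays are illustrative assumptions for the example, not figures from NEP.

```python
# Rough latency-budget sketch with illustrative figures (not NEP tooling):
# one-way contribution latency = fibre propagation delay + codec delay.

FIBRE_KM_PER_MS = 200  # light in fibre covers roughly 200 km per millisecond

def one_way_latency_ms(distance_km: float, codec_delay_ms: float) -> float:
    """Propagation plus encode/decode delay, ignoring switching and buffering."""
    return distance_km / FIBRE_KM_PER_MS + codec_delay_ms

# Hypothetical 8,000 km venue-to-gallery path with two assumed codec delays.
for codec, delay_ms in [("low-latency JPEG XS", 5), ("long-GOP H.264", 500)]:
    print(f"{codec}: ~{one_way_latency_ms(8000, delay_ms):.0f} ms one way")
```

With an assumed sub-frame codec the propagation delay dominates and the path stays within a couple of hundred milliseconds; with an assumed long-GOP codec the encode delay dominates the budget.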

The Broadcast Bridge: That works for the human communication. Does that also work for camera control, or does the shading still need to basically be on site and close to the camera?

Dafydd Rees. NEP UK: “We are, on the whole, still doing shading locally, but that isn’t technology-driven. That’s a resource thing in the sense that we’re still sending people to site to rig monitors, to rig the OB, and run the cables. If the crew is there, they may as well rack the show while they’re there. But for some shows it is achievable. Shading is interesting because many shaders would argue that they can’t be expected to shade with ten milliseconds of latency. But as an ex-vision engineer myself, I question the idea that you react within ten milliseconds when the sun goes behind the cloud. There’s a threshold at which it becomes really difficult, but I think for most examples with 500 milliseconds of latency, remote shading should be manageable.”

“One benefit that is talked about for remote racking is that if I have somebody in the building, they can rack more than one football match in one day, and that’s where you drive the efficiency. It doesn’t always play out that way, because with football matches there’s not always a clean break between the end of one and the beginning of another. The workflow is not entirely technology-driven; there are other elements to it as well.”

Application Control Latency

The Broadcast Bridge: With hybrid models, when you have processing resource in one place and control surfaces in another, does that create its own latency issues, or has the connectivity become so good that it doesn’t matter anymore?

Dafydd Rees. NEP UK: “It largely depends on the connectivity you have available to you. A lot of what we do is dedicated managed connectivity, so we can be confident about the route that it’s taking and we can be certain about the roundtrip latency for all of the different elements. You have to be mindful of that latency to be sure it’s not beyond a certain threshold. There are different ways that you can do that. There’s always an encoding balance. If you have time, you can use a more efficient algorithm that takes longer to do things, but if you start creeping up against a long roundtrip latency, you might mitigate that by having a faster codec in the first place, like JPEG XS for example. You’re looking at encoding bandwidth, latency, availability, and cost.”
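
The trade-off Rees describes can be pictured as a simple selection problem. The sketch below, with entirely made-up codec delay and bandwidth figures, picks the most bandwidth-efficient codec that still fits inside an overall latency budget once the link’s round-trip time is accounted for.

```python
# A minimal sketch of the encoding trade-off, using assumed codec figures:
# given the link round-trip time and an overall latency budget, pick the
# most bandwidth-efficient codec that still fits.

# (name, encode+decode delay in ms, relative bandwidth cost) - all assumed
CODECS = [
    ("long-GOP HEVC", 600, 1.0),   # most bandwidth-efficient, slowest
    ("intra-only AVC", 80, 3.0),
    ("JPEG XS", 5, 6.0),           # fastest, heaviest on bandwidth
]

def pick_codec(link_rtt_ms: float, budget_ms: float):
    """Most bandwidth-efficient codec whose delay plus link RTT fits the budget."""
    fitting = [c for c in CODECS if link_rtt_ms + c[1] <= budget_ms]
    return min(fitting, key=lambda c: c[2]) if fitting else None

print(pick_codec(link_rtt_ms=40, budget_ms=300))    # short link: intra-only AVC fits
print(pick_codec(link_rtt_ms=280, budget_ms=300))   # long link: only JPEG XS fits
```

On a short link there is time to spend on a more efficient codec; as the round-trip creeps up, only the fastest codec keeps the total within budget, which is the balance of bandwidth, latency, availability and cost described above.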

Buffers

The Broadcast Bridge: How are you handling that control latency, then, when you need to compensate? Is it buffers in the system?

Damien Hesse. NEP Americas: “Yes there are buffers in the system from a control perspective. It depends on the link. Sometimes it’s not fast enough, so we try to minimize the number of hops the link takes. If it’s going over fiber we will reach out to the fiber provider, and they will look at the pathing and try to re-route things and we’ll try to get it down to where it needs to be. But it is a challenge, especially for control. Once you start hitting around 150 milliseconds, that is as latent as things can be.”
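
One way to picture that kind of control-path discipline is a monitor that tracks a smoothed round-trip estimate against a ceiling. The sketch below is a hypothetical illustration, not NEP’s tooling; only the 150 millisecond figure comes from the conversation above, and the class, window size and sample values are assumptions.

```python
# Hypothetical illustration (not NEP's monitoring stack) of watching a
# control path against the ~150 ms ceiling quoted above: keep a smoothed
# round-trip estimate over recent samples and flag when it creeps over.

from collections import deque

CONTROL_RTT_CEILING_MS = 150  # figure quoted above; everything else is assumed

class ControlPathMonitor:
    def __init__(self, window: int = 20):
        self.samples = deque(maxlen=window)  # recent RTT measurements in ms

    def add_sample(self, rtt_ms: float) -> None:
        self.samples.append(rtt_ms)

    def smoothed_rtt_ms(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def over_budget(self) -> bool:
        return self.smoothed_rtt_ms() > CONTROL_RTT_CEILING_MS

monitor = ControlPathMonitor()
for rtt in (90, 140, 160, 175, 190):   # made-up measurements in ms
    monitor.add_sample(rtt)
print(f"{monitor.smoothed_rtt_ms():.0f} ms smoothed, over budget: {monitor.over_budget()}")
```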

Monitoring & Comms

The Broadcast Bridge: In remote models, program return paths, localized audio monitoring and comms are all equally critical and equally sensitive to latency. Are these sharing the same infrastructure as the contribution feeds etc.?

Damien Hesse. NEP Americas: “In REMI workflows, the encode path will send things to where they’re going and then the returns come back on that same path. For intercom, we use a vendor that uses its own proprietary format, but it is IP and shares the same infrastructure as everything else. There are tallies that come back over that data path, returns from multiviewers and programming that will all come back on that same path. It’s a two-way street.”

The Broadcast Bridge: Audio gets its own articles in the next part of our Live Sports Production series, but it has come up in conversation throughout with a simple acknowledgement: because of latency, every production model requires a localized audio system at the venue to QA feeds and to handle monitoring and comms control.

Dafydd Rees. NEP UK: “Comms is critical. The ability to talk between crew members, between production and cameras, and between engineering and production is vital. When you lift and shift the production element from one end to the other, you create more routes between the end points. For us, that is IP and it shares the same contribution philosophy as all the other video feeds. Increasingly, it’s becoming part of the IP stream from one end to the other, either as 2110 flows or another IP flow. The principle is the same regardless, the building blocks are the same regardless, but you are moving some of those blocks from place to place and creating different pipework between them.”

Future Change

The Broadcast Bridge: It is always interesting to hear different perspectives on what people in the industry see coming down the technological line, so we asked our contributors what they see coming next and how it might affect what they do.

Damien Hesse. NEP Americas: “As the industry does more work in the cloud and continues to innovate software-based solutions, user interaction remains important. People still want to hit buttons, they still want to feel tactile control. Yes, as an industry we’re doing great work with software now, but we still use good hardware that users can interact with. Our TFC broadcast orchestration platform is a leader in bridging hardware and software, and our teams are doing tremendous work and innovations in this area.”

Dafydd Rees. NEP UK: “We’re seeing the shift from dedicated hardware to software-based solutions in broadcast. There’s a lot there that’s moving at pace. It’s incredible how quickly it is picking up speed. That’s a big shift, and our teams are on the cutting edge of this with TFC, our broadcast orchestration platform.”

The Broadcast Bridge: With the data center model, with any full remote or hybrid remote, there’s stuff that’s in a data center or off in a machine room somewhere already. With the transition to software on COTS, all you’re really doing is swapping out one type of resource for another, so from a deployment perspective the change is potentially more manageable, except perhaps for the need for a different set of IT skills?

Dafydd Rees. NEP UK: “I think it’s incumbent on us all to make sure that we are engaging with the entire engineering community and empowering and enabling them to learn the new skills they need to adapt to evolving technologies. This is an important initiative internally at NEP, and for the industry at large.”
