Latency Remains Thorn In Side Of Live Sports Remote Production

After years of trial and error designed to reduce operating costs and (more recently) keep crews safely distanced, remote production has found its niche in live production and will remain the de facto method for producing events over a distributed network infrastructure. However, a big hurdle left to overcome for successful deployment of such networked workflows is latency. In live production, video latency refers to the amount of time it takes for a single frame of video to travel from the camera to a processing location (on premises or in the cloud) and back to the display—wherever that display might be.

Latency is inherent in the broadcast chain due to the processing of video for distribution both internally and (eventually) to the public, but many other factors add latency as well: encoding and decoding, compression, distance, and switch hops. In short, latency is present everywhere, across all infrastructures, whether the signal is audio or video, carried over IP or copper; the issue is being aware of it and knowing how to manage it.

The longer the signals have to travel, the greater the chance of latency creeping into the distribution path, because they often must pass over several “hops” of video processing to reach their destinations in real time. When frame delay (latency) is introduced, it makes the job of the announcers, video switcher and replay operators, camera shaders and other technical crew—who rely on a program monitor—much harder. Audio typically runs ahead of the video, but the two must be synchronized at the receive end or lip-sync errors and other problems occur.

The Beast Within

“Latency is a beast that we all have to deal with, there’s no way around it,” said Marco Lopez, general manager of live production at Grass Valley. “There is inherent latency in the speed of light and light moves very quickly, but when signals are traveling over long distances, especially across continents, latency becomes a problem. Also, depending upon the complexity of the Telco routing in place, you may have different points along that path that add latency. Therefore, it’s an issue that system designers have to be aware of.”

So, when aggregated together, distance, the speed of light and those required hops create unwanted latency. Depending upon the operator position and the work being done, anywhere between 200 milliseconds (ms) and 500 ms (approximately half a second) has been deemed acceptable. Beyond those levels, the experience becomes uncomfortable or challenging for the operator. For example, most production switcher operators like to be below 200 ms, while others, such as replay or graphics operators, can tolerate up to half a second.
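
To put those numbers in context, end-to-end latency is roughly the sum of per-stage delays checked against a per-role limit. The stage values below are illustrative assumptions (only the 200 ms and 500 ms tolerances come from the figures above); a minimal Python sketch of such a budget check:

```python
# Illustrative glass-to-glass latency budget for remote production.
# The per-stage delays are assumed placeholder values; the role limits
# reflect the rough tolerances cited above (200 ms for switcher
# operators, up to 500 ms for replay/graphics operators).

STAGE_DELAYS_MS = {
    "camera_capture": 17,    # ~1 frame at 60 fps (assumed)
    "encode": 35,            # codec-dependent (assumed)
    "network_transit": 40,   # distance + switch hops (assumed)
    "decode": 35,            # codec-dependent (assumed)
    "monitor_display": 17,   # ~1 frame at 60 fps (assumed)
}

ROLE_LIMITS_MS = {
    "switcher_operator": 200,
    "replay_operator": 500,
    "graphics_operator": 500,
}

total = sum(STAGE_DELAYS_MS.values())
for role, limit in ROLE_LIMITS_MS.items():
    verdict = "OK" if total <= limit else "OVER BUDGET"
    print(f"{role}: {total} ms of {limit} ms allowed -> {verdict}")
```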

Grass Valley’s Agile Media Processing Platform (AMPP) includes several underlying patents that directly address latency.

Helping To Manage The Problem

TheBroadcastBridge.com spoke with several suppliers of the orchestration and compression software (and production hardware) used for remote production. All are keenly aware of the issue and have devised similar, but slightly different, ways of managing latency. Due to the unique nature of each production and where it is located, latency is not an easy problem to solve.

“Every signal that arrives from a distant source may be delayed,” Axel Kern, Senior Product Manager Control & Orchestration, at Lawo, said. “For production, it is less important to receive all signals as fast as possible, and more important to receive them timely aligned. Some signals may have travelled longer or have been differently processed. Consequently, these signals may be more delayed than others, which would affect the production negatively—imagine a cut to a camera signal, which is delayed by a second. Therefore, it is extremely important to ensure signal alignment prior to production, which is achieved by buffers in the RTP receivers.”

These buffers, he said, vary in size to cope with various incoming delays from a production’s multiple sources.
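
A rough illustration of the alignment Kern describes: buffer every incoming stream so all of them play out at a common target delay, pegged to the slowest source plus a jitter margin. This is a simplified sketch of the concept, not Lawo’s implementation, and the per-source delays are assumed values.

```python
# Simplified receive-buffer alignment: each source is buffered so that
# all sources play out with the same total latency. Not Lawo's
# implementation; the measured delays are assumed example values.

measured_delay_ms = {
    "camera_1": 40,       # short fiber path (assumed)
    "camera_2": 55,       # extra processing hop (assumed)
    "remote_feed": 220,   # long-haul contribution link (assumed)
}

SAFETY_MARGIN_MS = 20  # headroom to absorb network jitter (assumed)
target_delay = max(measured_delay_ms.values()) + SAFETY_MARGIN_MS

for source, delay in measured_delay_ms.items():
    buffer_ms = target_delay - delay
    print(f"{source}: buffer {buffer_ms} ms so it plays out at {target_delay} ms")
```

The trade-off is that the whole production then runs at the latency of its slowest source, which is one more reason contribution paths are kept as short and direct as possible.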

Most remote productions today use some type of dedicated fiber line, mixed seamlessly with a cloud-based processing element, to provide the scalability and reach the network requires. Some have even leveraged the public internet for parts of the workflow—using proven methods like Reliable Internet Stream Transport (RIST) and other streaming protocols to ensure reliability. But since the internet can be unpredictable, many broadcasters feel most comfortable with their own direct connections, which help to reduce latency.

Blackmagic Design offers its line of ATEM live production switchers with built-in features to address latency.

“Our ATEM Mini line features adjustable audio delay on the analog inputs,” said Bob Caniglia, Director of Sales Operations, North America, at Blackmagic Design. “It’s also possible to ‘direct’ switch input 1 to the HDMI output for low latency, and the ATEM Mini Extreme models support two low-latency direct loops. This, along with adjustable audio delay, extends across [most of] our switchers.”

Private Broadcast Cluster Computing with Lawo’s V__matrix vm_dmv Distributed Multiviewer helps keep latency low.

He said that each 12G-SDI input features an independent Teranex low-latency standards converter [as low as 67 ms], allowing inputs to be any combination of HD, Ultra HD and 8K formats, depending on the switcher. In addition, the ATEM Constellation 8K switcher’s rear panel Ethernet connection supports the ATEM switcher protocol for fast, low-latency switching. All of Blackmagic Design’s ATEM control panels and the free ATEM Software Control use this protocol, allowing the panels to be plugged directly into the switcher, so there’s no need for a network switch. However, because it’s Ethernet, customers can plug into their network and control the ATEM remotely, regardless of location.

Breaking Down Latency Components

Grass Valley’s Agile Media Processing Platform (AMPP) includes several underlying patents that directly address latency. [It was used for the recent Super Bowl in Tampa, Florida.] The techniques are proprietary, but they use redundant paths and other clever signal processing to protect against (and minimize) frame delay. As part of AMPP, the company holds multiple patents addressing latency and timing within a remote production in a cloud environment.

“We’ve done a lot of work to break down latency into its multiple components,” said Lopez. “One of the patents is called Three-plane timing, with the planes being 1) I have the people and equipment at the venue; 2) I have the location where all of the computation (processing) and output is going to occur, and 3) I have an operator that could be at a completely different location from the event—they could be working from their home.

“You really have to take all of these different planes of timing into account when setting up a remote production workflow,” he said. “We have the secret sauce within AMPP that allows us to manage the inherent latency.”

All agree that adjusting for latency before it becomes a problem is clearly the best course of action. How much latency a production can absorb before it negatively affects the crew depends upon the application at hand.

“There is one area where delay must be as low as possible, which is at the generation of the source signal,” said Kern. “Nothing is more disturbing than a delay of a few milliseconds on the in-ear monitors of your talent when he is listening to his own voice. To avoid this, you may not want to mix the monitor signals from abroad.”

“The workflow that is the most difficult to handle is one where you have the talent at the event, and the processing is being done remotely,” Grass Valley’s Lopez said. “It’s either in the cloud or at some long distance away from the talent. What you feed back to them is the output from the remote processing engine, wherever that might be. All of these round-trip paths will have a large amount of latency and it becomes a challenge for the talent.”

The way to solve latency in this case is to locate physical hardware at either end of the workflow: hardware at the main production facility and hardware at the venue, the latter used to mix the content on site so the talent can see and react to the action in real time. This way, whatever is being done locally, the talent and crew can see immediately. This does create some duplication of compute power, which can add cost, but it solves the latency problem, and that is what is most critical.

One way Grass Valley manages latency is to give users the ability, from its switcher panels, to control multiple compute processes, whether they run on on-premises hardware or in the cloud. This allows the switcher operator to work at a distance while the compute stays local to where the talent and crew are.

“The best way to control latency is to make sure audio is plugged directly into the camera, as it reduces latency the most,” Blackmagic’s Caniglia said. “Any device in a workflow will create a delay – camera, switcher, display, streaming, etc. However, by having the audio follow the video, you can reduce the overall delay.”

He added that when considering the Internet for remote production connectivity, physical distance does not affect latency.

“Going next door is the same as going around the world. However, if you’re taking cables and distance across the studio into consideration, as long as you re-clock, there won’t be a delay.”

In a remote environment, headless mixers like Calrec’s RP1 core help minimize IFB latency by keeping the IFB processing in the same location.

Audio Intercom Issues

From an audio perspective, the biggest challenge is in-ear monitoring and the requirement for people to hear themselves on set in real time—it is essential for IFB (interruptible foldback) monitoring to have a low-latency signal path.

“In a remote environment, headless mixers like Calrec Audio’s RP1 core, or in fact any core, minimize IFB latency by keeping the IFB processing in the same location,” said Kevin Emmott, Marketing Manager at Calrec. “Control latency can be more of an issue with remote surfaces working with remote cores, but control lag is more forgiving, and operators have learned to live with this.”

He said there are many ways to minimize latency in the signal path, and each one has a cumulative effect. Balancing related paths to have the same latency is a good start. Calrec consoles have delay built in across input delay (256 legs of 2.73 ms), path delay (all paths have 2.73 ms) and output delay (256 legs of 2.73 ms). For remote production and remote working, mixers like Calrec’s RP1 minimize IFB and monitoring delay by processing all the audio for IFB monitoring on-site.
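
To see how fixed-size legs are applied in practice, consider bringing several parallel paths up to the latency of the slowest one by adding whole 2.73 ms legs (at 48 kHz, 2.73 ms works out to roughly 131 samples). A hypothetical sketch, not Calrec’s actual configuration logic, with assumed path latencies:

```python
import math

# Hypothetical path balancing using fixed 2.73 ms delay legs, loosely
# modelled on the console delays described above. Path latencies are
# assumed example values; real consoles also offer finer trim.

LEG_MS = 2.73  # one delay leg (~131 samples at 48 kHz)

path_latency_ms = {
    "host_mic": 5.0,
    "remote_commentary": 24.5,
    "effects_feed": 12.2,
}

target = max(path_latency_ms.values())

for name, latency in path_latency_ms.items():
    legs = math.ceil((target - latency) / LEG_MS)  # whole legs only
    aligned = latency + legs * LEG_MS
    # Paths end up matched to within one leg (2.73 ms) of each other.
    print(f"{name}: +{legs} legs -> {aligned:.2f} ms (target {target:.1f} ms)")
```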

“On an IP network, a lower packet time such as 125 µs will increase bandwidth (this is due to bigger packet overhead based on having to create more packets), but it will lower latency as it takes less time to packetize,” he said. “Switch hops increase latency and create more data buffering, so keeping these to a minimum helps. Distance obviously makes a big difference, so the backhaul links will also have a big effect: there will be a big difference in speed (and cost!) between dark fiber and copper links.”
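
The trade-off Emmott describes is easy to quantify. Assuming a single 48 kHz, 24-bit audio channel and roughly 40 bytes of IP/UDP/RTP header per packet (a typical figure for AES67-style streams, though exact overheads vary by network), a short sketch:

```python
# Packet-time trade-off for an IP audio stream: shorter packets lower
# packetization latency but add header overhead, raising bandwidth.
# Assumes one 48 kHz / 24-bit channel and ~40 bytes of IP/UDP/RTP
# header per packet; real-world overheads vary by network.

SAMPLE_RATE_HZ = 48_000
BYTES_PER_SAMPLE = 3   # 24-bit linear PCM
HEADER_BYTES = 40      # IP (20) + UDP (8) + RTP (12)

for packet_time_s in (0.000125, 0.001):  # 125 µs vs 1 ms
    samples = SAMPLE_RATE_HZ * packet_time_s
    payload_bytes = samples * BYTES_PER_SAMPLE
    packets_per_s = 1.0 / packet_time_s
    mbps = packets_per_s * (payload_bytes + HEADER_BYTES) * 8 / 1e6
    print(f"packet time {packet_time_s * 1e6:.0f} µs: "
          f"{packet_time_s * 1e3:.3f} ms packetization delay, {mbps:.2f} Mb/s")
```

Running it shows the shape of the trade-off: the 125 µs stream needs more than twice the bandwidth of the 1 ms stream, in exchange for one-eighth of the packetization delay.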

Compression Adds Delay

When sending camera feeds, a large amount of bandwidth is required for the video content, and there are multiple video streams between the camera and the base station. If industry-standard wrappers such as SMPTE ST 2022-6 are employed, the video content can be easily extracted and different kinds of compression can be applied.

When JPEG 2000 or JPEG XS is used, the bandwidth required for visually lossless signal transmission is typically only 10% of an uncompressed transmission, according to product engineers at Net Insight. An even higher compression rate can be applied if the available IP bandwidth requires it, but signal delay and timing might increase. That’s because compression introduces latency to the signal transmission. First, the audio signals embedded in the camera transmission protocol need to be delayed by the same amount as the video signals. Also, all of the other audio sources from the production site need to be synchronized with the camera audio and video signals.
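
As a rough sanity check on that 10% figure, a nominal 1.485 Gb/s HD-SDI feed would come down to roughly 150 Mb/s when compressed to visually lossless quality. The sketch below also shows the audio-matching step the paragraph describes; the codec delay figures are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope contribution-link sizing plus audio alignment.
# The ~10% visually-lossless ratio comes from the text above; the
# specific codec delays below are illustrative assumptions.

UNCOMPRESSED_HD_SDI_MBPS = 1485   # nominal HD-SDI rate
COMPRESSION_RATIO = 0.10          # ~10% of uncompressed

compressed_mbps = UNCOMPRESSED_HD_SDI_MBPS * COMPRESSION_RATIO
print(f"per-camera link budget: ~{compressed_mbps:.1f} Mb/s")

# Embedded audio must be delayed by the video codec's latency so the
# two stay in sync at the receive end.
video_codec_delay_ms = 100   # e.g. JPEG 2000 (assumed round figure)
audio_delay_ms = 2           # near-uncompressed audio path (assumed)

compensation_ms = video_codec_delay_ms - audio_delay_ms
print(f"delay embedded audio by {compensation_ms} ms to match the video")
```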

For the camera shader working in a centralized control room, any latency of the signal on the shading monitors is a challenge. This is especially true for exposure control, where latency needs to be kept as low as possible. A transmission latency of around 100 ms, as typically generated by JPEG 2000 compression, is still manageable for most applications. But the additional network latency needs to be taken into account, which might push the overall latency to 150 ms or more. At those levels, applications such as fast-action sports or productions under very demanding lighting conditions will not allow for acceptable camera shading in Live At-Home remote productions.

Things Are Improving

The good news is that with each new remote production, new lessons are being learned about how to manage latency in the most cost-effective way. Systems engineers are learning from real-world experience and getting better at it.

“The work that we started four years ago with the GV AMPP initiative has resulted in a lot of innovation and in figuring out how to solve these tough problems,” Lopez said, adding that network designers should eliminate as many stages of processing as possible. “We’ve gotten a lot better but there’s still room for improvement.”

One general rule of thumb: if you can produce your high-quality content in one location on premises, and then go up to the cloud only once for distribution, latency will be significantly reduced. When the video and audio arrive together, perfectly in sync, the workflow is seamless to the user and all is right within the remote production world.
