The Future Of Live Production Is Moving To The Cloud

Mature cloud-based services are now prevalent across the industry, helping to process and distribute content faster and more accurately than ever before. As a result, the long-sought promise of producing content in the cloud, with its reduced cost and physical barriers, has prompted broadcasters and production companies to experiment with new ways to make it a common reality.

This shift is driven by the need to generate ever more content, and it is accomplished using a suite of microservices (a technology stack) that automatically processes a signal based on user-selectable infrastructure and production-type attributes.

However, several technical challenges remain to be resolved to ensure a reliable telecast in the style viewers have become accustomed to, especially for live sports. The main one is latency. The time it takes to send live camera signals up to the cloud, process them, interweave different elements into a multi-camera production, and then distribute the result to multiple platforms can run into hundreds of milliseconds.
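To see how those hundreds of milliseconds accumulate, the chain can be modeled as a simple latency budget. This is a minimal sketch; the stage names and millisecond figures below are illustrative assumptions, not measurements of any specific platform.

```python
# Illustrative glass-to-glass latency budget for a cloud-based live
# production chain. All figures are assumed for illustration only.

def total_latency_ms(stages: dict[str, float]) -> float:
    """Sum the per-stage contributions of an end-to-end latency budget."""
    return sum(stages.values())

budget = {
    "camera_encode": 20.0,      # encoding at the venue
    "uplink_to_cloud": 40.0,    # last-mile transport into the cloud region
    "cloud_processing": 50.0,   # switching, graphics, mixing
    "distribution": 60.0,       # packaging and delivery to platforms
}

if __name__ == "__main__":
    total = total_latency_ms(budget)
    print(f"Estimated glass-to-glass latency: {total:.0f} ms")
    # Even modest per-stage delays quickly add up to hundreds of milliseconds.
```

The point of the exercise is that no single stage dominates; each hop contributes, which is why vendors attack the problem at several points in the chain at once.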

Microservices Make It Happen
Grass Valley’s Agile Media Processing Platform (AMPP) is a collection of many small video processing microservices and applications that can be used in a wide variety of workflows and spun up and down as required in a matter of seconds.

The company said this gives customers huge flexibility using generic compute and the ability to only pay for what they need at any point in time.
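The economics behind “only pay for what they need” can be sketched with simple arithmetic. The hourly rate and production schedule below are hypothetical illustration values, not real cloud or AMPP pricing.

```python
# Rough cost comparison between always-on dedicated capacity and
# microservices spun up on demand. All rates and durations are
# hypothetical values chosen for illustration.

HOURLY_RATE = 2.50  # assumed $/hour for the compute a service needs

def always_on_cost(days: int) -> float:
    """Cost of keeping dedicated capacity running around the clock."""
    return days * 24 * HOURLY_RATE

def on_demand_cost(productions: int, hours_each: float) -> float:
    """Cost when services exist only for the duration of each production."""
    return productions * hours_each * HOURLY_RATE

if __name__ == "__main__":
    month_fixed = always_on_cost(30)         # capacity idle between shows
    month_elastic = on_demand_cost(12, 4.0)  # twelve 4-hour productions
    print(f"Always-on: ${month_fixed:.2f}  On-demand: ${month_elastic:.2f}")
```

Under these assumed numbers, a broadcaster producing twelve four-hour shows a month pays for 48 compute-hours instead of 720, which is the flexibility argument in a nutshell.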

One example of multi-vendor cooperation, Dejero, Microsoft Azure, Avid, Haivision, Hiscale, Make.TV, and Signiant have joined forces to demonstrate live ingest and editing in the cloud.

“The first and most important thing for Grass Valley is that cloud-based production doesn’t simply mean taking existing products and running them on virtual machines,” said Ian Fletcher, Chief Application Designer at Grass Valley. “We have spent years building a complete cloud-native secure SaaS ecosystem underpinned by key technology pieces that give our customers a huge amount of flexibility and agility. The two most important aspects of the technology stack are handling timing and latency, as getting both of these elements right is critical to real-time live production.”

Producing in the cloud also means technical talent can be deployed in new ways, allowing the most sought-after directors, TDs, graphics operators, producers and other staff to work even when they are not at the physical location of a live sports event.

Eliminating Geographical Barriers
“A hugely important impact of cloud technology is the elimination of geographical barriers,” said Mike Burk, General Manager, LTN Create (a division of LTN Global). “By using cloud-based workflows, the pool of talented individuals who can contribute to a production environment has expanded. While unlocking access to highly skilled operators across distributed locations, you can also enhance efficiencies. Now, an individual operator can work on multiple shows per day, from one central hub—be that a state-of-the-art production facility or just at home.”

LTN offers a suite of complementary capabilities that enables the seamless orchestration of cloud-based workflows—not limited to a handful of products or services.

On the production side, LTN Flex allows customers to leverage remote workflows and the centralized resources and expertise at its Kansas City production facility. The company also offers LTN Live Video Cloud, a scalable cloud-based routing system that enables customers to acquire, aggregate, and distribute any number of live content feeds and seamlessly integrate them into a show.

“With our suite of cloud-enabled solutions and services, we’re able to deliver cloud-based production, cloud-based master control, or channel playout,” said Burk. “Customers have different requirements and demands, and to align with that we can harness cloud technology for as much or as little of the workflow as needed.”

The Vizrt Control Production Suite includes a suite of virtualized products and services.

Uploading Sources To The Cloud
Another challenge is “last mile connectivity”: getting signals and control from the venue to the cloud and back again. Data connectivity is generally reliable between data centers, but that last mile connection, to a remote location or into a suburban home, is always the weakest link.

Vizrt offers its own live production solution that helps users deploy an end-to-end live production workflow, from source to delivery, in the cloud. The Vizrt Control Production Suite is made up of the company’s IP-based switcher Viz Vectar, production automation system Viz Mosart and real-time graphics products Viz Trio and Viz Engine. The package is now being used in hundreds of thousands of hours of live production. This, the company said, solves the “getting sources into the cloud” problem.

This solution offers cloud-based production for everything from traditional programming to live event streaming. It is easily deployable to AWS, Google Cloud, Azure or a private cloud, and is offered via flexible pricing plans that give broadcasters the opportunity to increase production capability according to need.

“The challenge most people face is the difficulty of interconnecting and bringing sources to the cloud,” said André Torsvik, Head of Marketing Strategy at Vizrt Group. “Vizrt Live Production is built to natively leverage NDI 5, which supports video, audio and metadata transmission over IP with near-zero latency and can be used across any distance on LAN, WAN and over public internet.”

He added that latency is still a concern for many, and that there are multiple ways to address it, such as delaying the audio to match the video, or reducing the data rate of the transmission pipe using compression.
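The first of those techniques, delaying the audio so it lines up with later-arriving video, can be sketched in a few lines: prepend silence equal to the video pipeline’s extra delay. This is a minimal illustration, with an assumed sample rate and delay value, not any vendor’s implementation.

```python
# Minimal sketch of audio delay compensation: pad the start of an audio
# buffer with silence so it stays in sync with delayed video. The sample
# rate and delay figure are illustrative assumptions.

SAMPLE_RATE = 48_000  # samples per second, common in broadcast audio

def delay_audio(samples: list[float], delay_ms: float) -> list[float]:
    """Prepend silence equivalent to delay_ms of video pipeline latency."""
    pad = int(SAMPLE_RATE * delay_ms / 1000)
    return [0.0] * pad + samples

if __name__ == "__main__":
    audio = [0.1, 0.2, 0.3]            # tiny stand-in for a real buffer
    delayed = delay_audio(audio, 1.0)  # align with 1 ms of video latency
    print(len(delayed))                # 48 samples of silence + 3 samples
```

In a real system the delay would be applied continuously inside a ring buffer rather than by list concatenation, but the arithmetic (samples of silence = sample rate × delay ÷ 1000) is the same.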

“While NDI itself adds near-zero latency, there are many factors that impact this in the entirety of production in the cloud, many of which are outside the sphere of influence of the system vendors.”

Remote Audio Production
Henry Goodman, Director of Product Development at Calrec Audio, said that his company has worked with many customers over the last 12 months on remote mixing, and that there is definitely no one-size-fits-all solution to mixing in the cloud in this way.

There are workflows over IP, using hardware, using software-only, using different products, going through Calrec Assist (a browser-based interface for setting up projects), having Assist as a backup, using multiple control surfaces, etc., he said. What everyone needs most is a lot of flexibility in how they set up a software-centric infrastructure.

Calrec Assist is a software GUI that helps users mix audio and adjust console levels in the cloud.

“We’ve definitely seen a shift to more remote control of audio signals, and mixing, in the cloud,” said Goodman. “Although for our customers the processing is still on dedicated hardware, there has been a clear shift to exploit more distributed workflows.”

Calrec IP products like the Type R and the ImPulse core allow multiple mix environments to operate from a single core, even when those mix environments are remotely located. They can be controlled remotely by a physical control surface or on screen through a dedicated web interface like Calrec Assist.

“These products also have flexible licensed DSP packs that can be switched remotely and instantly, and remote surfaces can be expanded with Cat5 cables over PoE,” he said. “When it comes to cloud processing, providing audio mixing at scale is still a challenge, but advances in remote control and more flexible remote workflows mean there are no issues around the geography of mixing Calrec consoles in the cloud.”

And with remote audio production in the cloud, Goodman said, latency is always an issue. However, in most remote production workflows it is an issue operators are learning to live with, and manufacturers are learning to combat with technologies like the RP1, a dedicated remote production core. These place processing at the edge of the network to create local IFB mixes that can still be controlled remotely, mitigating latency for in-ear monitoring.

The Future Is Cloud-Based
It’s clear that as more and more manufacturers and broadcasters adapt to virtualized working environments and embrace flexible workflows, many live programs are being mixed remotely or using virtualized surfaces. There is now huge interest in cloud products, and software as a service (SaaS) in general, and there is no denying that the cloud will play a role in the future of live audio production. In theory, these models are flexible and cost effective, simplify remote working and bring the industry closer to pure distributed production models.

“Flexibility and scalability, and especially the potential to match your production muscle to changing needs, will be the key underpinnings of production setups in the future,” said Vizrt’s Torsvik. “Add in the possibility to work from anywhere, with a distributed team contributing to the production, and the possibility of location-independent disaster recovery, and it is hard to imagine how cloud will not play a key role at least in part of the setups of broadcasters going forward.

“However, it is important to note that this should be a business choice and not enforced by the vendor; Vizrt technology enables customers to deploy where it makes sense for them, not where they are forced to. It also helps them to make the move to cloud when it makes sense for them – and we recognize that for many customers this will mean some sort of hybrid implementation for some time to come.”

Everyone we spoke with agreed that there is no one-size-fits-all strategy. The project at hand should dictate the technology application, not the other way around.

“Cloud-based workflows will be the future of live production, although the transition to the cloud is no one-size-fits-all task,” LTN’s Burk said. “Customers have varying requirements and demands, and many live events will still require a degree of localized production workflows. Cloud is the future, but at least for some time, we’ll see a hybrid environment.”

“While we have seen many live productions turn to the cloud due to the current crisis, there is no question that many customers are seeing the long-term benefits of this approach and will continue to operate like this for the foreseeable future,” said Grass Valley’s Fletcher. “However, that doesn’t mean it makes sense for all use cases. If, for example, switching a large number of sources at a remote venue is still a requirement, then it’s going to be more efficient and cost-effective to do that at the edge rather than bring all of those streams into the cloud.

“To survive in a continuously challenging market,” he added, “the word we hear the most from customers is ‘agility.’ This is where a well-architected cloud-native solution can absolutely deliver the benefits.”
