Making Remote Mainstream: Part 2 - Core Infrastructures

In Part 1 of this three-part series we discussed the benefits of Remote Production and the advantages it provides over traditional outside broadcasts. In this part, we look at the core infrastructure and uncover the technology behind this revolution.

As discussed in Part 1, the major benefit of Remote Production is that it gives program makers and broadcasters more choices, with the potential to increase both the quality and quantity of programs.

Instead of thinking of Remote Operation in the static, linear terms of the outside broadcast, that is, point-to-point links and peak-demand infrastructures, we should begin to think in terms of dynamic, distributed systems.

Each of the program essence streams can be processed independently because ST-2110 re-creates the underlying sample clock in software, carrying the timing information within the stream itself. This allows the essence streams to be processed on entirely different systems.
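
To illustrate the principle, the sketch below (a simplification, not an ST-2110 implementation) shows how each essence stream can carry its own timing: RTP timestamps are derived from the shared PTP clock at that essence's media clock rate, 90 kHz for video and 48 kHz for audio, so the streams can be processed separately and re-aligned later.

```python
# Minimal sketch of per-essence timing derived from a common PTP clock.
# This is an illustration of the idea, not an ST-2110 implementation.

VIDEO_CLOCK_HZ = 90_000   # ST 2110-20 video media clock
AUDIO_CLOCK_HZ = 48_000   # ST 2110-30 audio media clock (48 kHz sampling)

def rtp_timestamp(ptp_seconds: float, media_clock_hz: int) -> int:
    """Map a PTP time (seconds since the PTP epoch) to a 32-bit RTP timestamp."""
    return int(ptp_seconds * media_clock_hz) % 2**32

# The same instant expressed in each essence's own clock domain, so video,
# audio and metadata can be processed on different systems and re-aligned later.
now_ptp = 1_700_000_000.0125   # hypothetical PTP time in seconds
print(rtp_timestamp(now_ptp, VIDEO_CLOCK_HZ))
print(rtp_timestamp(now_ptp, AUDIO_CLOCK_HZ))
```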

ST-2110 Opportunities

As ST-2110 has now freed us of hardware timing constraints, many new and interesting options are open to the broadcaster. This application model frees broadcasters to ask deep and challenging questions: why does the slo-mo operator need to be at the venue? Why can we not perform AI processing in the cloud to tag live video and audio for ingest?

Figure 1 – ST-2110 allows us to abstract away the video, audio, and metadata essence from the underlying data link and transport stream. One major advantage of this is that the essence feeds can all be processed independently of each other if required.

Another interesting scenario occurs when considering pool feeds. For international events, a host broadcaster will be nominated by the organizers to provide a program feed for their own broadcast and a clean-feed for international broadcasters. The international broadcasters then add their own graphics and commentary voice-overs.

Streaming Flexibility

In the distributed Remote Production example, it’s perfectly possible to stream the clean-feed from the production hub. But with Remote Operation, there is also the option of streaming individual cameras and microphones to other broadcasters so they can make their own productions. This is particularly useful as broadcasters start to experiment with virtual reality (VR) and augmented reality (AR). 

Broadcasters may also work extensively with third-party production facilities, as IP enables uncompressed or compressed feeds to be sent to them. The feeds can be sourced directly from the venue or the production hub, so they don’t have to go through expensive video- and audio-specific circuits. Most, if not all, production facilities will have IP-compliant circuits into their buildings, making integration with Remote Operation even easier.

Return Feeds

IP-compliant circuits allow the camera feeds to be backhauled to the production facility, and the bi-directional nature of IP circuits allows return vision, monitor feeds, and even teleprompter feeds to be sent back to the cameras.

A similar solution has the potential to occur with audio, that is, microphone feeds can be sent to the production hub and return monitoring feeds can be sent back to the venue. However, latency plays a big part in making audio monitoring work reliably, and to keep it as low as possible, monitoring is usually provided on site.
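
As a rough illustration of why monitoring stays local, the sketch below estimates a monitoring round trip using assumed figures (roughly 5 microseconds per kilometer of fiber, plus a nominal buffer per piece of equipment). Even a modest distance quickly approaches the roughly 10 ms of delay often quoted as the comfortable limit for live monitoring.

```python
# Rough, illustrative estimate of a remote monitoring round trip.
# Figures are assumptions: light in fiber travels at roughly 200,000 km/s,
# i.e. about 5 microseconds per kilometer each way, before any codec delay.

FIBER_US_PER_KM = 5.0

def round_trip_ms(distance_km: float, per_hop_buffer_ms: float = 1.0, hops: int = 4) -> float:
    """Propagation there and back, plus a nominal buffer at each piece of equipment."""
    propagation_ms = 2 * distance_km * FIBER_US_PER_KM / 1000
    return propagation_ms + hops * per_hop_buffer_ms

# A venue 400 km from the production hub:
print(f"~{round_trip_ms(400):.1f} ms round trip")  # roughly 8 ms before any codec delay
```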

There are occasions when the broadcaster may decide to place more of the infrastructure equipment at the venue instead of keeping it at the production hub. This might occur when the event is so big, with so much revenue at stake, that the broadcaster wants to record everything on site as well as providing a full production feed, similar to how outside broadcasts work. Or the telcos may not be able to provide sufficient capacity for an IP-compliant circuit, so backhauling is not possible.

The type and quality of circuit provided by the telcos has a significant influence on how the remote production will work. Three parameters influence the quality of an IP-compliant circuit: data rate, data loss, and latency. The SLA (service level agreement) agreed with the telco should specify all three, but it’s fair to say that a higher data rate, and lower loss and latency, result in a higher cost.
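
As a back-of-envelope guide to the data-rate parameter, the sketch below estimates what a single uncompressed HD camera feed would need; the figures are illustrative assumptions and exclude packet header overhead.

```python
# Back-of-envelope sketch of the "data rate" parameter in the SLA: what an
# uncompressed ST 2110-20 style HD feed needs before packet overhead.
# Numbers are illustrative, not a vendor specification.

def uncompressed_video_bps(width: int, height: int, fps: float,
                           bit_depth: int = 10, samples_per_pixel: int = 2) -> float:
    """Active-picture data rate for 4:2:2 video (luma + chroma = 2 samples per pixel)."""
    return width * height * samples_per_pixel * bit_depth * fps

hd = uncompressed_video_bps(1920, 1080, 50)
print(f"1080p50, 4:2:2, 10-bit: ~{hd / 1e9:.2f} Gbit/s per camera, plus RTP/UDP/IP overhead")
```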

Latency is Inevitable

All data circuits have a certain amount of latency. The continuous video and audio data streams are packetized, leading to the use of memory buffers in the sending and receiving equipment, and in the network too. This is an inevitable consequence of using IP. It’s not a question of “if” we have latency, but “how much”.
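
The sketch below breaks that unavoidable delay into its main contributors, using assumed figures: the time to fill each packet, propagation across the network, and a receive buffer deep enough to absorb jitter.

```python
# Illustrative breakdown of one-way packet latency: packetization delay,
# network propagation, and a de-jitter receive buffer. All figures are assumptions.

def one_way_latency_ms(packet_time_ms: float, propagation_ms: float,
                       jitter_buffer_packets: int) -> float:
    """Sum of packetization delay, network propagation, and de-jitter buffering."""
    return packet_time_ms + propagation_ms + jitter_buffer_packets * packet_time_ms

# e.g. 1 ms audio packets, 3 ms of propagation, and a 4-packet de-jitter buffer
print(f"~{one_way_latency_ms(1.0, 3.0, 4):.1f} ms one-way")  # 8 ms
```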

Figure 2 – Latency is inevitable in packet switched and asynchronous IP networks. IP packets entering a switch on ports P1, P2, and P3 are multiplexed and sent out to P10. Some of the packets are temporally shifted causing both network jitter and latency.

Leased circuits contracted from telcos will have the latency specified in the contract, so it is more predictable. However, if a broadcaster uses the public internet, or even a shared service, then latency is neither guaranteed nor predictable. The good news is that there are solutions to overcome this, and vendors who can supply connectivity over the internet.

Latency also occurs in broadcast workflows, specifically when we compress video and audio or try to synchronize streams. A great deal of research has been conducted into video and audio compression in recent years. This is another area where broadcasters can benefit from progress in other industries, as much has been done to improve video compression in telecommunications, predominantly to reduce the amount of data needed to deliver a video and audio stream.

As video has a significantly higher data rate than audio, the effects of reducing the data bandwidth are noticed more in video compression. One method is to analyze movement over a period of frames, find the differences between them, and then send only the difference information. The assumption is that most temporally adjacent video frames will be similar. To achieve this form of efficient compression, many frames of video must be buffered, and it’s this buffering that significantly adds to the delay, or latency.
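
A quick calculation shows how that buffering translates into delay; the frame counts and rates below are assumptions for illustration only.

```python
# Simple sketch of why inter-frame (temporal) compression adds latency: the
# encoder must hold a run of frames before it can code the differences.

def frame_buffer_latency_ms(buffered_frames: int, frame_rate: float) -> float:
    """Delay contributed just by holding frames, before any encoding time."""
    return buffered_frames / frame_rate * 1000

# e.g. a codec that looks across a 12-frame group of pictures at 50 fps
print(f"{frame_buffer_latency_ms(12, 50):.0f} ms")  # 240 ms of added delay
```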

Last Mile Costs

Latency is a combination of factors in both the IP-compliant network and the broadcast workflow, and the two are usually additive. As a general rule of thumb, the lower the data rate and the higher the data loss, the longer the latency. Broadcasters don’t always have a choice of the data circuits available, so they have to compromise, but Remote Production still gives them many more choices.
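
A hypothetical budget makes the point; every figure below is an assumption, but it shows how the network and workflow contributions simply add together.

```python
# Hypothetical end-to-end latency budget. Every figure is an illustrative
# assumption; the point is that the contributions are additive.

latency_budget_ms = {
    "packetization + de-jitter buffering": 8,
    "network propagation (leased circuit)": 4,
    "encode/decode frame buffering": 40,
    "production switcher / mixer processing": 20,
}

total = sum(latency_budget_ms.values())
print(f"Total glass-to-glass contribution: ~{total} ms")
```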

For telcos, the “last mile” of cabling is often the most expensive to provide. Fiber cables may need to be installed under roads or through ducting, resulting in high installation costs. However, these are still relatively low compared to the costs of installing full-bandwidth audio and video circuits.

Where the equipment resides is then a compromise between many factors. These are not only technical decisions but production and financial decisions too. For some productions it may be better to have commentators at pitch-side. For others, it may not be possible or even desirable, especially if one commentator needs to cover two or three games. Being in a central production hub allows them to cover several games; otherwise they would have to travel from one venue to another, often on the same day.

The beauty of Remote Production is that the broadcaster has many more choices than they do with the traditional outside broadcast method. 