The Sponsors Perspective: Video And Audio Transportation

The concept of remote production – moving raw content generated at an event site back to the main facility for production and management – has been rapidly gaining popularity in the broadcast world.


This article was first published as part of Essential Guide: Flexible Contribution Over IP.

Mobile and on-site content has also been gaining wide popularity, giving viewers access to a range of sporting events and other location-based broadcasts via traditional over-the-air, cable, and streaming options.

In parallel with the expansion of consumer viewing platforms, the range of content formats in use is expanding: HDR 1080p and UHD contribution are becoming common for major sporting events, and 8K production is either being deployed live or trialed.

When COVID-19 hit, broadcasters faced a dual dilemma: their viewers wanted even more remotely generated content to feel connected while stuck at home, but the broadcasters needed to keep their own personnel safe while producing that increased volume of content.

Personnel safety drove two key technology shifts. First, a rapid acceleration of remote production deployments. The 2021 Olympics in Tokyo was a solid case in point: broadcasters sent 39% fewer people than would be usual for such an event, with much of the production handled back at their primary facilities or via a remote production company, which in turn had to operate on site with reduced personnel. Sending fewer people not only kept broadcast personnel safe; broadcasters immediately noted a dramatic reduction in costs as the amount of on-site equipment plummeted and transportation, housing, and other site expenses fell as well. Increasing the volume of on-site broadcasts for viewers also became more viable because centralized production allows the same crew to handle multiple events on the same day.

The second key shift was that the facilities themselves were staffed at much lower levels, thanks to the ability to use domestic internet connections, compression technologies and secure remote access tools to let staff perform operations and engineering functions effectively from home. These ‘at home’ workflows required a shift away from the compression technologies used over managed networks towards lower-bitrate codecs, protected by ARQ-based protocols such as SRT, Zixi and RIST, which allow content to tolerate the packet loss expected over domestic internet connections and also provide tools for encryption and firewall traversal with minimal configuration.
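As a rough illustration of the kind of tuning these ARQ-protected links need, the sketch below estimates a receive-buffer latency for an SRT-style link from a measured round-trip time, using a commonly quoted rule of thumb (a latency of roughly four times RTT, with a minimum floor). The multiplier and floor are illustrative assumptions, not vendor guidance.

```python
def suggested_arq_latency_ms(rtt_ms: float, multiplier: float = 4.0,
                             floor_ms: float = 120.0) -> float:
    """Estimate a receive buffer latency for an ARQ-protected link (e.g. SRT-style).

    A retransmission request plus the resent packet costs roughly one RTT,
    so the buffer must hold several RTTs of content to ride out repeated
    losses. The 4x multiplier and 120 ms floor are illustrative assumptions.
    """
    return max(floor_ms, multiplier * rtt_ms)


if __name__ == "__main__":
    for rtt in (10, 40, 120):  # ms: e.g. metro, national, intercontinental paths
        print(f"RTT {rtt:>3} ms -> suggested receive latency "
              f"{suggested_arq_latency_ms(rtt):.0f} ms")
```

The practical point is that the buffer, and therefore the added delay, scales with the distance and quality of the internet path rather than with the codec.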

While remote production deployments continue to grow, many broadcasters are still weighing their options, because it isn’t as simple as many would have them believe. Everything depends on the telecommunications infrastructure available at each event. Not all sites are created equal: a pro team’s stadium will typically have dedicated fiber links with all the high-speed bandwidth needed, while non-traditional locations may offer connections that vary wildly in bandwidth and in the amount of equipment that can be connected at once. No one wants to plug their video and audio feeds into a shaky internet connection and end up with an unusable product (and possible penalties for failing to meet contracted coverage requirements). Clearly, some homework is required before planning a remote event to determine how the feeds and backhaul will be handled.

For venues and facilities with access to high-bandwidth, high-reliability IP connectivity, content contribution can leverage codecs that offer extremely low latency, such as JPEG XS, which has seen rapid adoption recently. JPEG XS encode and decode latency can be so low that a full return-path workflow between two sites introduces less than one frame of delay compared with completely uncompressed delivery, making it ideal for productions with talent in multiple geographic locations, where conversation flows more naturally, or as a tool for achieving the lowest-latency ‘glass to glass’ workflows.
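To make the “less than one frame” claim concrete, a back-of-envelope latency budget might look like the sketch below. All of the stage figures are illustrative assumptions (sub-millisecond line-based JPEG XS encode and decode, a nominal fibre propagation delay, a 1080p50 frame period of 20 ms), not measurements of any particular product or network.

```python
# Illustrative glass-to-glass budget for a JPEG XS return-path workflow.
# Every stage figure is an assumption for the sake of the arithmetic.

FRAME_RATE = 50.0                  # 1080p50
FRAME_MS = 1000.0 / FRAME_RATE     # 20 ms per frame

budget_ms = {
    "jpeg_xs_encode": 0.5,         # assumed sub-frame, line-based encode
    "packetisation_and_buffering": 2.0,
    "network_propagation": 5.0,    # ~1000 km of fibre at ~5 us/km
    "jpeg_xs_decode": 0.5,
}

one_way_ms = sum(budget_ms.values())
round_trip_frames = (2 * one_way_ms) / FRAME_MS

print(f"One-way contribution delay : {one_way_ms:.1f} ms")
print(f"Full return path           : {2 * one_way_ms:.1f} ms "
      f"({round_trip_frames:.2f} frames at {FRAME_RATE:.0f} fps)")
```

Even with a full send-and-return path, the assumed budget stays under one frame period, which is what makes natural two-way conversation between sites workable.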

There are still considerations beyond codec and bitrate selection. JPEG XS compressed video can currently be carried either in a SMPTE ST 2110 workflow, encapsulated as ST 2110-22 with video, audio and ancillary data carried as separate essence flows (specified in VSF TR-08), or within an MPEG transport stream, where the essences are multiplexed into a single flow (specified in VSF TR-07).

The essence-based nature of SMPTE ST 2110 has desirable benefits in production workflows. However, it can be complex to handle and monitor, and it generally requires specialized equipment at both the send and receive sites: PTP to synchronize the essence flows, usually from a GNSS-locked grandmaster clock, and PTP-aware switch fabrics. This can present challenges such as positioning antennas with line of sight to satellites from buildings or underground locations. In locations with SDI hand-off, provisioning the IP infrastructure may be cost or space prohibitive, especially for small flyaway packs, and as such may drive a decision to use the TR-07 transport stream-based encapsulation instead. A thoughtfully designed encoder and decoder implementation can preserve the low-latency characteristics of the JPEG XS codec, enabling a transport stream to be used without a latency penalty compared with ST 2110.

SMPTE ST 2110-based deployments commonly use SMPTE ST 2022-7 diverse-path redundancy to protect against packet loss and path failure within the IP fabric. This also needs planning (and testing) to consider the effect of total loss of one path: can the alternative path deliver the content with zero packet loss, or does a degree of FEC need to be applied to protect against low levels of loss in this scenario?
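Conceptually, ST 2022-7 protection sends identical RTP packets down two paths and lets the receiver reconstruct a single stream from whichever copy of each sequence number arrives first. The sketch below illustrates that merge idea only; it ignores sequence-number wrap-around, reordering windows and timing, so it is not an implementation of the standard.

```python
def hitless_merge(path_a, path_b):
    """Merge two redundant packet streams into one, ST 2022-7 style.

    Each path yields (sequence_number, payload) tuples. The first copy of a
    given sequence number wins; duplicates are discarded. Conceptual sketch
    only: real receivers work packet-by-packet in time, handle 16-bit
    sequence wrap-around and bound the reordering window.
    """
    merged = {}
    for stream in (path_a, path_b):
        for seq, payload in stream:
            merged.setdefault(seq, payload)   # keep first-arriving copy
    return [merged[seq] for seq in sorted(merged)]


# Path A loses packet 3, path B loses packet 5 -> merged output is complete.
path_a = [(1, "p1"), (2, "p2"), (4, "p4"), (5, "p5")]
path_b = [(1, "p1"), (2, "p2"), (3, "p3"), (4, "p4")]
print(hitless_merge(path_a, path_b))   # ['p1', 'p2', 'p3', 'p4', 'p5']
```

The scheme only stays hitless while losses on the two paths do not overlap, which is why the total loss of one path needs to be planned for separately.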

If diverse paths are not available, a single-ended workflow may mandate the use of a low-latency packet loss protection mechanism such as FEC. While there is a bandwidth overhead to be considered, this doesn’t appreciably increase latency in the way that ARQ-based mechanisms can.
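For row/column FEC in the SMPTE 2022-1 / Pro-MPEG COP3 style, both the bandwidth overhead and the buffering it implies follow directly from the chosen matrix size: an L×D block of media packets generates L column and D row repair packets, so the overhead is (L+D)/(L×D), and the receiver must hold roughly one matrix of packets. The sketch below works that arithmetic through for two matrix sizes; the 200 Mb/s stream bitrate and 1316-byte payload are illustrative assumptions.

```python
def fec_overhead(l_cols: int, d_rows: int) -> float:
    """2D row/column FEC: an L x D block of media packets generates
    L column + D row repair packets, so overhead = (L + D) / (L * D)."""
    return (l_cols + d_rows) / (l_cols * d_rows)


def matrix_buffer_ms(l_cols: int, d_rows: int,
                     bitrate_mbps: float, payload_bytes: int = 1316) -> float:
    """Approximate time to accumulate one FEC matrix of media packets.

    Assumes 7 TS packets (1316 bytes) of payload per IP packet; the stream
    bitrate is an illustrative figure for a JPEG XS contribution flow.
    """
    matrix_bits = l_cols * d_rows * payload_bytes * 8
    return matrix_bits / (bitrate_mbps * 1_000_000) * 1000.0


for l, d in ((10, 10), (20, 5)):
    print(f"L={l:>2} D={d:>2}: overhead {fec_overhead(l, d):5.1%}, "
          f"~{matrix_buffer_ms(l, d, bitrate_mbps=200):.1f} ms of matrix "
          f"buffering at 200 Mb/s")
```

At contribution bitrates the matrix is filled in a few milliseconds, which is why FEC adds far less delay than waiting a full round trip for a retransmission.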

Is the underlying network capable of utilizing multicast for delivery to multiple endpoints from a single encoder? If unicast delivery is required, can the encoder deliver multiple unicast instances of the encoded signal without the need for external NAT or replication services?
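The multicast question is ultimately a bandwidth question: a single multicast stream costs one copy of the bitrate at the encoder’s uplink, while unicast replication costs one copy per receiver. A quick illustrative calculation, with assumed stream and uplink figures:

```python
# Uplink bandwidth needed at the encoder for N receivers.
# The 200 Mb/s JPEG XS stream and 1 Gb/s uplink are illustrative assumptions.
stream_mbps = 200
uplink_mbps = 1000

for receivers in (1, 3, 5, 10):
    unicast_total = stream_mbps * receivers
    verdict = "fits" if unicast_total <= uplink_mbps else "exceeds uplink"
    print(f"{receivers:>2} receivers: multicast {stream_mbps} Mb/s, "
          f"unicast replication {unicast_total} Mb/s ({verdict})")
```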

Additionally, cloud platforms are growing in popularity for live production workflows, initially driven by necessity during the pandemic and now maturing, with software toolsets aligning with the capabilities of on-prem hardware-based solutions. Contribution of linear video into cloud workflows has traditionally used lower-bitrate codecs such as AVC, HEVC or NDI, augmented by ARQ over the public internet. However, the increased availability and reduced cost of high-bandwidth circuits from venues and broadcast facilities now mean that ultra-low-latency codecs such as JPEG XS are also viable for ground-to-cloud contribution.

If reliable bandwidth is available but constrained, more complex codecs such as AVC or HEVC enable more content to be carried at lower bitrates while maintaining very high levels of visual quality. The trade-off is latency, with widely supported low-latency AVC/HEVC workflows adding approximately 700 ms of delay to an encode/decode path compared with an uncompressed or JPEG XS workflow.
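The bandwidth side of that trade-off can be put in rough numbers. Starting from the active video rate of a 1080p50 10-bit 4:2:2 signal (about 2.1 Gb/s) and applying indicative compression ratios (roughly 10:1 for JPEG XS contribution, 50:1 for AVC and 100:1 for HEVC at comparable quality; all three ratios are assumptions for illustration), the sketch below shows why constrained links push workflows towards the longer-GOP codecs.

```python
# Indicative contribution bitrates for 1080p50, 10-bit 4:2:2 active video.
# Compression ratios are illustrative assumptions, not codec specifications.
pixels = 1920 * 1080
bits_per_pixel = 20            # 4:2:2 sampling at 10 bits per component
fps = 50
uncompressed_mbps = pixels * bits_per_pixel * fps / 1e6

ratios = {"JPEG XS (~10:1)": 10, "AVC (~50:1)": 50, "HEVC (~100:1)": 100}

print(f"Uncompressed active video: {uncompressed_mbps:,.0f} Mb/s")
for name, ratio in ratios.items():
    print(f"{name:<16}: ~{uncompressed_mbps / ratio:,.0f} Mb/s")
```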

Even lower latency can be achieved with ultra-low-latency implementations of codecs such as HEVC ULL. This approach does not produce a traditional GOP of I/IDR, P and B frames; no complete intra frames are used. Instead, the encoder uses GDR (Gradual Decoder Refresh) rather than IDR (Instantaneous Decoder Refresh), a technique often referred to as stripe refresh. While this enables workflows with end-to-end latency of less than 200 ms, it does not exploit the efficiency of GOP-based encoding and therefore needs to run at higher bitrates than traditional AVC or HEVC encoders. Still, it consumes dramatically less bandwidth than JPEG XS- or JPEG 2000-based systems.
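The sketch below illustrates the stripe-refresh idea: instead of sending a whole intra frame at the start of each GOP, one stripe of every frame is intra-coded, and the intra region sweeps across the picture so that a decoder joining the stream is fully refreshed after one complete sweep. The stripe count here is an arbitrary illustrative choice, not a codec parameter from any standard.

```python
def gdr_schedule(num_stripes: int, num_frames: int):
    """Which stripe of the picture is intra-coded in each frame under GDR.

    A decoder that joins the stream has a clean picture once every stripe
    has been refreshed, i.e. after num_stripes frames, without any single
    (large, latency-spiking) IDR frame ever being sent.
    """
    return [frame % num_stripes for frame in range(num_frames)]


stripes = 8
for frame, refreshed in enumerate(gdr_schedule(stripes, 10)):
    picture = ["I" if s == refreshed else "P" for s in range(stripes)]
    print(f"frame {frame:>2}: {' '.join(picture)}")
```

Because every frame is roughly the same size, the encoder never produces the large intra frames that force buffering in a conventional GOP structure, which is where the sub-200 ms figure comes from.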

There is no defined standard yet for HEVC ULL to enable cross-vendor interoperability, so for now at least, the same vendor’s encoder and decoder are required.

Even when reliable, managed bandwidth is assured, contribution over the public internet can make for a cost-effective continuity strategy and can therefore be used to supplement managed deployments.

Many managed contribution networks are dedicated to media carriage and, as such, tend not to apply content protection, favoring the most efficient use of bandwidth and the lowest achievable latency. There are, however, numerous use cases where content traverses a private but mixed-use IP fabric, such as a corporate IT backbone. In these circumstances it may be desirable, or even mandatory, to encrypt the content to prevent possible interception. Numerous techniques are available to achieve this, from simple passphrase-protected encryption to RSA-encrypted session keys for each receiver.
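As a simplified illustration of the per-receiver session-key approach, the sketch below uses the Python cryptography library to wrap a random AES-GCM content key with each receiver’s RSA public key; only the holder of the matching private key can unwrap it and decrypt the media. This is a conceptual sketch, not the key exchange used by any particular contribution product.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each receiver holds its own RSA key pair (generated here for the example).
receivers = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
             for name in ("decoder_a", "decoder_b")}

# One random content (session) key protects the media stream itself.
session_key = AESGCM.generate_key(bit_length=128)

# Wrap the session key individually with every receiver's public key.
wrapped_keys = {name: key.public_key().encrypt(session_key, OAEP)
                for name, key in receivers.items()}

# Encrypt a media payload once with the shared session key.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"compressed media packet", None)

# A receiver unwraps its copy of the session key and decrypts the payload.
recovered = receivers["decoder_a"].decrypt(wrapped_keys["decoder_a"], OAEP)
assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"compressed media packet"
print("decoder_a recovered the payload")
```

The attraction of this pattern is that the bulk media encryption happens once, while access can be granted or revoked per receiver simply by managing whose public key gets a wrapped copy of the session key.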

Multiple types of signal hand-off are required at the compression edge. Many facilities and trucks now use an uncompressed ST 2110 IP routing core with an NMOS control layer, while others use SDI. Often the contribution solution needs to service both forms of hand-off within the same workflow, depending on what is available at a given location, requiring flexible I/O that encompasses electrical and optical SDI as well as IP connectivity capable of supporting uncompressed UHD. Other workflows may not decode a compressed signal back to baseband and, as such, require tools to transcode content and perform processing in the compressed domain, police media flows, and provide NAT and multicast/unicast conversion capabilities.

Before planning an on-site or cloud production workflow, ask what types of connections are available and what sort of bandwidth can be expected from each. If it’s IP-based, find out whether backups exist in case the main feeds fail. Another critical step is to determine the level of security on those connections. If they’re visible to hackers, they can be easily disrupted.

One piece of equipment that can help mitigate bandwidth issues is a good-quality compression platform, which provides low-latency compression and decompression to make video easier to transport over IP. Some remote video equipment includes rudimentary encoders, but those encoders may not be up to the task when presented with varying IP speeds or other bandwidth constraints, and they may not offer an adequate level of IP security or content encryption. Stand-alone encoders, while admittedly adding to the amount of equipment going to the on-site location, are usually the best choice for equipment connectivity, IP connectivity, and high levels of security and content protection.

When selecting an encoding platform, make sure it has the versatility to handle the available bitrate, the required video resolutions, and the capacity and interfacing needed for the available connection. A good platform will offer standards-based interoperable format support, control APIs, encryption options, and physical and content redundancy models, and should also provide robust firewall and traffic policing capabilities. Appear has made all of these needs a “must” in our solution portfolio.
