Audio For Broadcast: Routing & Asset Sharing

Getting audio sources to the right destinations is fundamental to broadcast production. From analog patch bays to SDI and onwards to the luxury of IP, device identification and routing interfaces have been a central part of daily life in broadcast.

Every development in television production has been driven by technology since the very first broadcasts in the 1920s, and adapting to new, more efficient technologies exposes everyone to different ways of working and creates opportunities.

Broadcast has always been about staying one step ahead. The quest for greater efficiencies and more creativity is a strong motivator, as content producers strive to implement the most appropriate tools to create better content faster, and for less money.

In the last few decades the biggest step changes in broadcast have been fundamental ones, concerning what all that technology is built on, as attention has shifted to the network fabric underpinning the transport of audio, video, communications, data and other assets.

Routing and asset sharing in a broadcast environment is the process of distributing media content to multiple teams. It is about how best to manage those signals as well as how to deliver them, and after decades of established network infrastructures it has undergone nothing short of a revolution. A step change. Transformative. 

Audio over IP (AoIP) has changed the landscape, and it has been a long time coming; broadcasters were embracing AoIP long before video caught up. It is more flexible, cheaper to install and is creating a new breed of television engineer: one with an IT degree.

In this section of The Broadcast Bridge’s introduction to Audio for Broadcast, we’re not talking about the routing of individual audio signals into the audio console, nor are we talking about routing comms to teams located around the facility.

We’re looking at the bigger picture; we’re looking at how it all hangs together.

Let’s Route

Broadcast routing techniques illustrate the same technological progress. Early networks were crude and physical, with point-to-point analog signals from the production floor brought to a central hub and patched by hand on massive patchbays. They used miles of expensive and heavy copper cabling and were time-consuming to build and maintain.

Not to mention how much they weighed. Rigging an outside broadcast could be a real headache. Take golf: a typical championship golf course can cover up to 200 acres, and the 30km or so of heavy copper cabling needed to cover it could require additional vehicles. Installation and derig times could be considerable, as could transportation costs.

As the industry moved away from copper and began switching to digital signals, the cables became capable of carrying multiple channels of audio. Time Division Multiplexed (TDM) signals became more commonplace and the Audio Engineering Society (AES) championed formats like AES3 (for stereo digital audio) and the Multichannel Audio Digital Interface (MADI).

MADI was proposed in 1988 by a like-minded group of manufacturers: Sony, AMS Neve, Mitsubishi and SSL, in case you were wondering. MADI is a serial data structure capable of transmitting up to 64 channels of digital audio on a single coaxial or fiber-optic cable, and was standardized as AES10 in 1991. The MADI standard continued to develop; a 2003 revision specified BNC connectors with coaxial cables to provide connectivity up to 50m, while ST connectors enabled distances of up to 2km over fiber.
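To put numbers on that channel count, here is a rough back-of-the-envelope sketch: at 48kHz, with MADI's 32-bit subframes, 64 channels sit just inside the usable capacity of its fixed 125Mbit/s line rate.

```python
# Back-of-the-envelope arithmetic for MADI (AES10) capacity.
# 64 channels of 32-bit subframes at 48 kHz must fit inside the
# usable portion of MADI's fixed 125 Mbit/s line rate.

CHANNELS = 64
SUBFRAME_BITS = 32        # 24 audio bits plus status/user/parity bits
SAMPLE_RATE = 48_000      # Hz

payload_rate = CHANNELS * SUBFRAME_BITS * SAMPLE_RATE
print(f"Audio payload: {payload_rate / 1e6:.3f} Mbit/s")  # 98.304 Mbit/s

# 4B/5B channel coding leaves 4/5 of the 125 Mbit/s line rate usable,
# so 64 channels at 48 kHz sit just inside the budget.
usable = 125e6 * 4 / 5
print(f"Usable capacity: {usable / 1e6:.1f} Mbit/s")      # 100.0 Mbit/s
```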

As digital broadcast infrastructures grew and the industry moved even further away from copper to coax and fiber, MADI enabled system engineers to design more efficient routing schemes, and it was especially beneficial in outside broadcast environments where it could be used to transport all kinds of audio, including comms.

It was very popular on the golf course.

Continuing Serial

Around the same time, Serial Digital Interface (SDI) emerged and had a number of similarities to MADI. It was also a digital technology which transported uncompressed signals, and it used the same cable infrastructure. But while MADI is primarily used for audio, SDI is primarily used for video.

Nevertheless, it carved a deep niche in broadcast networks. Its secret is that SDI embeds up to 16 channels of audio into the video signal, and it does so on a single coax cable. Like MADI, it has also moved with the times, with adaptations for HD-SDI and ever higher bit rates, now up to 12G, to allow UHD signals to be carried on SDI networks.

SDI proved useful because a broadcaster can embed multiple output signals in each stream, such as a 5.1 mix alongside its stereo downmix or additional language tracks, as well as associated metadata.
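As an illustration, the sketch below lays a hypothetical track plan, a 5.1 mix, its stereo downmix and a second-language pair, onto those 16 embedded channels, which are organized as four groups of four. The layout is an example rather than a standard allocation.

```python
# Illustrative only: laying out program audio across SDI's 16
# embedded channels, which are organized as four groups of four.
# The track plan below is a hypothetical example, not a standard.

tracks = [
    "5.1 L", "5.1 R", "5.1 C", "5.1 LFE", "5.1 Ls", "5.1 Rs",  # surround mix
    "Downmix L", "Downmix R",                                   # stereo downmix
    "Lang 2 L", "Lang 2 R",                                     # second language
]

for ch, name in enumerate(tracks):
    group = ch // 4 + 1   # embedded audio group 1..4
    slot = ch % 4 + 1     # channel 1..4 within the group
    print(f"Group {group}, channel {slot}: {name}")
```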

Although both technologies are still widely used in many broadcast infrastructures today, they have different data formats, protocols and applications. Format interoperability didn’t arrive until IP came on the scene, and even then, it was a long time coming.

Using Packets

Audio was way ahead of video when it came to IP networks. In fact, the first non-synchronous broadcast audio transport dates back to the 1990s.

ATM (Asynchronous Transfer Mode) was based on telecoms technology and transferred audio, video and data over distance in fixed-size packets. Unlike modern IP broadcast networks, ATM was a point-to-point technology which relied on endpoints to establish a connection, but it opened the door to the possibility of using packets for real-time data transfer.
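The fixed-size cell was the key idea. As a rough illustration, ignoring ATM's adaptation-layer overhead, the arithmetic below shows how a millisecond of stereo 24-bit audio at 48kHz maps onto 53-byte cells.

```python
# ATM moved data in fixed 53-byte cells: a 5-byte header plus a
# 48-byte payload. A rough illustration (ignoring adaptation-layer
# overhead): cells needed per millisecond of stereo 24-bit audio.

import math

CELL_PAYLOAD = 48                      # bytes of payload per cell
samples_per_ms = 48                    # 48 kHz -> 48 samples per ms
audio_bytes = samples_per_ms * 2 * 3   # 2 channels x 3 bytes per sample
cells = math.ceil(audio_bytes / CELL_PAYLOAD)
print(f"{audio_bytes} bytes/ms -> {cells} cells/ms")  # 288 bytes -> 6 cells
```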

AoIP

Modern IP broadcast infrastructures are built around SMPTE 2110, a suite of standards which covers how both video and audio media streams are broken down into data packets on a network. SMPTE 2110-30 refers specifically to the audio data and, despite a few tweaks, is largely based on the AES67 standard.
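The packet arithmetic behind such a stream is straightforward. Assuming the common baseline of 48kHz sampling and 24-bit linear PCM (L24), the sketch below shows how channel count and packet time determine the size of each audio packet's payload.

```python
# A rough sketch of SMPTE 2110-30 / AES67 packet sizing, assuming
# the common baseline of 48 kHz sampling and 24-bit linear PCM (L24).
# RTP payload = samples per packet x 3 bytes x channel count.

def rtp_payload_bytes(channels: int, packet_time_us: int) -> int:
    samples = 48_000 * packet_time_us // 1_000_000
    return samples * 3 * channels

for channels in (2, 8):
    for packet_time in (1000, 125):   # AES67 default 1 ms; 125 us also common
        size = rtp_payload_bytes(channels, packet_time)
        print(f"{channels} ch @ {packet_time} us: {size} byte payload")
# e.g. 8 channels @ 125 us -> 6 samples x 3 bytes x 8 = 144 bytes
```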

Up until the industry settled on AES67 as an IP standard, there were a number of ways to transport IP audio around a network. Livewire, Dante and Ravenna all emerged as manufacturer-developed protocols which delivered on the promise of IP workflows operating on standard IT equipment; all still exist, and all are now fully AES67 compliant. AES67 took the common elements of these technologies and defined a standardized way to implement them.

In addition, Audio Video Bridging (AVB) provides another networked audio option. An open set of IEEE networking standards, AVB operates at layer 2 of the networking model to provide low-latency, deterministic connectivity with a guaranteed quality of service, and it remains a strong choice for many audio installations.

Nevertheless, it has an inherent drawback in a broadcast environment: it is limited to a local area network, which makes it problematic when looking to share data with external facilities such as OB trucks or remote operation centers.

Power Is Nothing Without Control

While the big names (AES67 and SMPTE 2110) get all the press, there is another element of IP which broadcasters should be paying attention to. While the encoding, transportation and synchronization of media are covered by these standards, things like discovery and management are not.

The ability to find a device on a network and manage the signals between devices is essential; if transportation is interoperable, then discovery and management should be too. The Joint Task Force on Networked Media (JT-NM) is a consortium made up of representatives from the Advanced Media Workflow Association (AMWA), SMPTE, the EBU and the Video Services Forum (VSF), which is defining those requirements. While none of the resulting documents are standards, they are specifications which are broadly adopted by equipment manufacturers.

NMOS IS-04 is one of these, providing a way for a device to advertise what media streams it is outputting, which reduces the amount of manual configuration required to set up stream connections. Meanwhile, NMOS IS-05 forms connections between devices which are advertising their streams.
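In practice both are simple HTTP APIs. The sketch below shows the basic pattern, querying an IS-04 registry for available senders and then patching a receiver via IS-05. The URLs and receiver ID are hypothetical, API versions vary between deployments, and a real connection would usually also pass the sender's SDP transport file.

```python
# A minimal sketch of the NMOS pattern: find senders through an
# IS-04 Query API, then patch a receiver through IS-05 to take one
# of them. URLs, IDs and API versions here are hypothetical; a real
# connection would usually also carry the sender's SDP transport file.

import requests

REGISTRY = "http://registry.example.com"    # hypothetical registry
NODE = "http://rx-node.example.com"         # hypothetical receiver node
RECEIVER_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# IS-04: ask the registry which senders are advertising streams.
senders = requests.get(f"{REGISTRY}/x-nmos/query/v1.3/senders").json()
sender = senders[0]
print("Found sender:", sender["label"])

# IS-05: stage the chosen sender onto the receiver and activate it.
patch = {
    "sender_id": sender["id"],
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
}
requests.patch(
    f"{NODE}/x-nmos/connection/v1.1/single/receivers/{RECEIVER_ID}/staged",
    json=patch,
)
```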

The Path To Interoperability

The development of IP in broadcasting mirrors its development in our everyday lives. As consumers we are already up to our necks in IP. It’s on our phones, our home networks, heating systems, fridges, doorbells, toys and cars. IP makes our lives easier and we are utterly reliant on it.

Broadcast IP infrastructures offer similar benefits. Rather than being limited to point-to-point connectivity, everything on the network is a potential endpoint, and the promise of interoperability between different equipment manufacturers is compelling. It promises scalable workflows, enables remote working, lowers installation costs, reduces cabling, and offers continuous development without the requirement to invest in new hardware.

Costs are reduced and channel counts are increased. Meanwhile, full network redundancy is easily achieved with the adoption of SMPTE 2022-7. Where traditional redundancy uses a standby path to switch to in the event of an error, SMPTE 2022-7 sends multiple redundant packet streams and reconstructs a single stream from them, which means that if any packets are lost from one stream, the system picks them up from the other.

This protects against data loss by switching between two identical packet streams which can be sent over two completely separate paths.
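A toy sketch makes the principle concrete: keep the first copy of each sequence number to arrive, from whichever path delivered it, and discard the duplicates.

```python
# A toy illustration of the SMPTE 2022-7 idea: two identical RTP
# streams arrive over separate paths, and the receiver rebuilds one
# clean stream by keeping the first copy of each sequence number.

def reconstruct(path_a, path_b):
    """Each path is a list of (sequence_number, payload) tuples,
    with gaps wherever packets were lost on that path."""
    merged = {}
    for seq, payload in path_a + path_b:
        merged.setdefault(seq, payload)   # first copy wins; duplicates dropped
    return [merged[seq] for seq in sorted(merged)]

# Path A lost packet 2; path B lost packet 4 -- the output is complete.
a = [(1, "p1"), (3, "p3"), (4, "p4")]
b = [(1, "p1"), (2, "p2"), (3, "p3")]
print(reconstruct(a, b))   # ['p1', 'p2', 'p3', 'p4']
```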

Hybrid

Although many broadcasters still rely on older networks and IP may not be a primary focus, most of these fixed and remote broadcast networks will increasingly demonstrate some kind of hybrid IP connectivity. There is still a lot of crossover between IT and broadcast engineering, and broadcasters’ engineering support teams are adapting to cater for both disciplines.

Covid-19 accelerated IP adoption, and while SDI is still the main video infrastructure in many broadcast environments, SMPTE 2110 is having an ever bigger impact on broadcast production workflows.
