We continue our series on things to consider when designing broadcast audio systems with the beating heart of all things audio – the mixing console.
When most people picture an audio control room (ACR), it is the mixing console they imagine. It is the mixing console, with all its lights, knobs and fancy sliders, which delivers the wow moment. “Well, I wouldn’t know what to do with all those buttons,” people will say.
It’s the Instagram star and rightly so; not only has it got all the bells and whistles, but it literally manages all the bells and all the whistles.
Nevertheless, shifts in production workflows are changing this picture. Remote working and the Cloud are democratising more aspects of live production, and as broadcasters embrace distributed workflows, productions are embracing the benefits of flexible working with growing confidence.
It is in this environment that the traditional broadcast mixing console needs to keep pace.
Broadcasters As Cartographers
Although broadcasters are already on this road, they are drawing the map as they go. Issues like control latency over Wide Area Networks are largely being managed and the mixing console has already been split into its constituent parts.
The control surface – what everyone thinks of as the mixer – can use on-premise DSP in a traditional way, or it can control a DSP engine somewhere else (such as a venue or another broadcast facility), or it can be a hybrid of both.
The control surface no longer needs to be in a studio; it could be anywhere. There is no longer even a requirement for a physical surface: control can be from a laptop connected to the public internet, with automation simplifying some of the processes.
And while these workflows all take advantage of tapping into traditional processing engines (hardware which drives the control surface), the continued development and adoption of Cloud-based, scalable microservices will devolve these workflows even more.
Same Old, Same Old
All that said, the audio requirements of live production are unlikely to change.
As we covered in part one, the broadcast audio console – in whatever form – is the hub in the ACR, and that will still be true irrespective of whether it is physically central or whether it exists on the edges. Wherever the Mixing Operator is based, they will still have the same fundamental responsibilities.
All audio inputs and outputs will still need to be managed; complex and dedicated comms systems will still have to be arranged; all audio signals will still need to be processed appropriately and mixed together in an engaging way; multiple transmission formats will still need to be mixed; outputs will still need to be monitored for intelligibility and international compliance; latency will still have to be mitigated; and most importantly, the whole thing will still need to be guaranteed to stay on air, whatever the connectivity or geography.
So let’s break these down.
Fundamentally, the biggest aspect of any production is comms. Broadcast consoles are designed to keep people talking, with big routers and time-saving mix busses for internal communications, and this is even more important when the production team is split across multiple sites.
Even simple live productions, like a nightly news bulletin, will have a lot of people to keep in the loop, from talent, camera people and runners, to researchers, producers, directors, lighting engineers, guests and outside sources. The console is at the centre of this and feeds into the comms system, which adds a layer of talkback to keep everyone informed.
In addition, broadcast consoles simplify internal comms with mix-minus busses, which streamline routing and free up output busses. Mix-minus allows an interruptible foldback (IFB) mix to be sent to multiple destinations, minus each destination’s own contribution, at the touch of a button. The result is that each user can hear all of the other users but not their own microphone (which avoids confusion and potential feedback issues).
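As a minimal sketch of the idea (the source names and linear levels are hypothetical, not any console's actual API), a mix-minus feed is simply the sum of every contribution except the destination's own:

```python
def mix_minus(sources, destination):
    """Return the IFB feed for one destination: the full mix,
    minus that destination's own contribution."""
    return sum(level for name, level in sources.items() if name != destination)

# Illustrative source levels (linear gain, hypothetical names)
sources = {"presenter": 1.0, "guest": 0.8, "remote_reporter": 0.6}

# The remote reporter hears everyone except themselves
feed = mix_minus(sources, "remote_reporter")
```

On a real console this happens in the routing matrix rather than in software like this, but the principle is the same: one bus per destination, each automatically excluding its own return.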
Broadcast consoles have plenty of multitrack mixes for everyone else so that people hear only what they need to do their job. They might also go into foldback speakers, or into earpieces for musical guests on a magazine show. All these mixes are time-consuming to create and are in addition to the on-air mixes which we all hear when we’re watching the show at home.
This massive routing capability is fundamental to a broadcast mixer, and is one of the reasons why planning and preparation of the console for a live broadcast is so time consuming; the operator needs to be aware of who needs to talk to whom and how to set those mixes up.
Now that everyone in the production is on the same page – thanks comms! - the mixing console turns its attention to managing all the live signals and processing them appropriately so that an audience has a clear narrative to follow.
This is more akin to a traditional mixing console – in fact, it’s what all mixing consoles are designed to do. There are lots of things which affect how incoming sources sound, from the acoustics of the venue to microphone choice and placement. Potential issues can always be mitigated as part of the planning process, but in live broadcast there are always going to be things which cannot be controlled.
Again, specialist broadcast consoles have features to help with this. For example, high input headroom is an important design principle to counter unexpected and unplanned peaks. Multiple insert points in the signal chain are necessary to introduce external processing, and being able to introduce it at multiple points - whether it’s pre or post-EQ, or pre or post-fader - will influence the end result, so these also need to be considered.
Increasingly, with more pressure on mix operators to mix to multiple output formats, broadcast consoles also need to have assistive features which can do some of the heavy lifting to allow operators to concentrate on the craft.
Autofader features – or audio-follows-video – allow faders to be opened and closed automatically through GPIO triggers and are well suited to environments where audio sources have to change alongside different camera shots. This is especially useful in fast-paced motorsports where trackside cameras need matching audio to tell the story.
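The principle can be sketched as a fader that ramps towards open or closed depending on a GPIO state (the ramp step here is an illustrative assumption, not a value from any particular console):

```python
def autofader(current_gain, gpio_open, ramp_step=0.1):
    """Move the fader one step towards open (1.0) if the GPIO
    for this camera is active, otherwise towards closed (0.0).
    Ramping avoids audible clicks from hard cuts."""
    target = 1.0 if gpio_open else 0.0
    if current_gain < target:
        return min(current_gain + ramp_step, target)
    return max(current_gain - ramp_step, target)

# Simulate a camera cut: the trackside camera's GPIO goes active
gain = 0.0
for _ in range(12):  # called once per control cycle
    gain = autofader(gain, gpio_open=True)
```

In practice the console would run one such ramp per fader, driven by the vision mixer's tally/GPIO outputs.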
Automix systems, which originated with Dan Dugan’s pioneering automatic mixing system launched in 1974, are often employed on a broadcast mixer to automatically balance the levels of a selection of microphone channels, keeping the overall level of the mix constant and ensuring a consistent ambient/background noise level.
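The core of a Dugan-style automix is gain sharing: each microphone is given a share of the total gain in proportion to its level, so the gains always sum to unity and the overall mix level stays constant. A simplified sketch (real systems work on smoothed levels and in the dB domain):

```python
def gain_sharing(levels):
    """Dugan-style gain sharing: each channel's gain is its share
    of the total input level, so the gains always sum to 1.0 and
    the combined mix level stays constant."""
    total = sum(levels)
    if total == 0:
        return [0.0] * len(levels)
    return [level / total for level in levels]

# Three mics: one active talker, two picking up mostly ambience
gains = gain_sharing([0.5, 0.3, 0.2])
```

The loudest channel naturally receives most of the gain, while quiet channels are attenuated rather than gated, which is why the background ambience stays consistent.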
The Last Leg (And All The Other Legs)
In addition to managing all the comms, managing all the incoming signals and mixing it all together, the sound desk Operator also prepares the show for transmission.
Transmission outputs used to be simple, with a mono and stereo feed covering all the bases. The last 20 years have seen rapid development, with 5.1 surround, immersive formats and individual audio objects all on the table. The uptake in consumer devices which support spatial audio, like Apple’s AirPods Pro and 3D-capable soundbars, means that Next Generation Audio (NGA) content like immersive audio and personalisation is heaping more responsibility on the Operator; this is another area where broadcast consoles can help pick up the slack.
NGA is an umbrella term which covers technologies and ideas like immersive and object-based audio (OBA), a technique which encodes audio objects with accompanying metadata describing what each object is and how it contributes to a mix.
Audio objects can contribute to personalisation and accessibility features in consumer equipment and allow the contribution of certain objects to be modified by the viewer. While it’s still early days, real-time transport for this is possible through Serial ADM, a metadata format which can be used for live production.
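Purely as an illustration of the concept – this is not ADM or Serial ADM syntax, just hypothetical structures – an audio object can carry metadata that a renderer acts on to apply a viewer's preferences:

```python
# Illustrative audio objects: each carries metadata describing its
# role and whether the viewer is allowed to adjust it.
objects = [
    {"name": "commentary", "gain": 1.0, "personalisable": True},
    {"name": "crowd", "gain": 0.7, "personalisable": False},
]

def render(objects, viewer_prefs):
    """Apply the viewer's gain preferences to any object whose
    metadata marks it as personalisable; others pass through."""
    out = []
    for obj in objects:
        gain = obj["gain"]
        if obj["personalisable"]:
            gain *= viewer_prefs.get(obj["name"], 1.0)
        out.append((obj["name"], gain))
    return out

# A viewer who finds the commentary too loud turns it down
rendered = render(objects, {"commentary": 0.5})
```

This is the essence of personalisation and accessibility in NGA: the mix decisions travel as metadata, and the final combination happens in the consumer device.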
Immersive sound, on the other hand, is already gaining ground, with mixes for live sporting events adding more crowd ambience in the height channels, and immersive stings being inserted to add height to replay graphics and in-game statistics.
Broadcast consoles can help with automatic upmixing to multi-channel formats, whether that is for live transmission, or for multitrack ingest into asset management systems for archiving or repurposing.
And of course, they all need to be powerful enough to cope with all the extra stems and be able to flex to cope with the demands of changing production schedules.
We haven’t touched on monitoring, but monitoring is fundamental. In a broadcast environment it is about more than making sure it sounds good – it’s a legal requirement. Monitoring is all about guaranteeing quality of service to the consumer, but in many regions there are also strict broadcast standards that content providers must adhere to.
Loudness meters provide a way to monitor and regulate average loudness levels over the duration of a program and are usually part of the meter bridge on a broadcast desk. With several international loudness standards in play, these meters need to be flexible to guarantee compliance: the EBU (European Broadcasting Union) R 128 recommendation applies in Europe, ATSC (Advanced Television Systems Committee) A/85 applies in North America, and there are ARIB (Association of Radio Industries and Businesses) regulations in Japan.
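These standards build on the ITU-R BS.1770 measurement, in which loudness is derived from the mean square of K-weighted audio. A simplified sketch, assuming the samples have already been K-weighted and omitting the gating stages that EBU R 128 also requires:

```python
import math

def loudness_lufs(kweighted_samples):
    """Simplified single-channel loudness per ITU-R BS.1770:
    -0.691 + 10*log10(mean square) of K-weighted samples.
    Real meters also apply channel weighting and gating."""
    mean_square = sum(s * s for s in kweighted_samples) / len(kweighted_samples)
    return -0.691 + 10 * math.log10(mean_square)

# A full-scale square wave measures -0.691 LUFS under this formula
lufs = loudness_lufs([1.0, -1.0] * 480)
```

A compliant meter would compute this over gated 400 ms blocks to produce the integrated loudness figure that the regional standards set targets against (for example, −23 LUFS under EBU R 128).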
Monitoring can also apply to checking incoming signals are live – that your remote reporter can be heard and can hear cues from the studio - and that any upmixes and/or all downmixes are intelligible.
What’s The Delay?
Delay is an important consideration in a broadcast console, and this can be complicated further when production workflows are geographically distributed. Artificial delay is necessary to compensate for a variety of sync issues, such as video processing delays. Most broadcast consoles will have multiple points where delay can be inserted to bring things back into line.
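The basic mechanism can be sketched as a fixed delay line that holds audio back by a set number of samples to match, say, a video processing delay (the delay length here is illustrative):

```python
from collections import deque

class DelayLine:
    """Fixed audio delay: holds samples back by delay_samples,
    e.g. to re-align audio with a delayed video path."""
    def __init__(self, delay_samples):
        # Pre-fill with silence so early output is delayed, not garbage
        self.buf = deque([0.0] * delay_samples)

    def process(self, sample):
        self.buf.append(sample)
        return self.buf.popleft()

# Delay by 3 samples (a real unit would be e.g. ms * sample_rate / 1000)
d = DelayLine(3)
out = [d.process(x) for x in [1.0, 2.0, 3.0, 4.0, 5.0]]
```

Consoles typically offer such delay at several points in the chain – on inputs, inserts and outputs – so each path can be nudged independently back into sync.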
As workflows become more geographically diverse, and incorporate more varied environments like on-premise, off-premise and cloud processing, sync will need to be continually assessed and regulated.
Ready For Anything
We often talk about the broadcast industry being in transition. In fact, it’s always been in transition.
The adoption of IP infrastructures and a renewed focus on remote working and virtualisation have changed how we look at broadcast infrastructures, but in essence the same challenges need to be met and the fundamentals of those workflows remain unchanged.
Control is still paramount, reliability and redundancy are still key components, and an Operator still needs to manage all the audio in a simple and ergonomic way, with access to every parameter and a straightforward way to adapt on the fly wherever the broadcast demands it.
However it is done, and in whatever context, the mixing console will continue to be the technology which holds everything together.
It just might look a bit different on Instagram.