When asked what “good sound” means to them, each audio engineer will give you their take on what really counts.
Unsurprisingly, opinions tend to differ quite a bit, as there is no one-size-fits-all answer. And this is precisely what makes it so interesting to listen to mixes by respected audio engineers with different aesthetics and angles.
While they all agree that a mix should first and foremost serve the song and sound like all sonic elements it contains are there for a reason, the way to get there tends to vary. Obviously, studio mixes can be tweaked down to the last dB of EQ correction and revisited in case this change does not produce the intended result. A live context, on the other hand, mainly focuses on sound reinforcement that requires a slightly different approach and gives you far less time to perfect your mix.
Also, any tweak you make has an immediate impact on the sound the audience hears. Unless, of course, the desk has a function called “Listen Sense” (or something similar) that allows operators to adjust the channel parameters “offline”, using headphones in PFL mode, without the audience noticing what is going on behind the scenes. Once the sound engineer is happy with the result, they can apply the changes to the live mix the audience hears.
But we are digressing here. It turns out that it is much easier to agree on what a good mix should *not* contain: the low end should not be boomy, the high end must not be ear-piercing, the sonic image should not be “hollow”, and all parts need to sit well in the mix without acrobatic gain riding that may ruin the overall balance in a variety of respects. The frequency spectrum also needs to be adjusted so that the venue does not resonate back at you in an unpleasant way. This is far from easy, because an empty venue does not behave the same way as a packed one.
All of the above require signal processing: using EQs to focus on the important frequencies of each signal source, dynamics to ensure that all signals remain audible without dominating the sonic image when they shouldn’t, panning to improve signal separation and widen the sound image, and effects for sweetening. A pinch of punch may also be highly welcome.
Experienced sound engineers also care about a console’s basic sound, i.e., its sound with no processing applied at all. What happens when you throw all relevant faders open at sound check? Is the sound muddy, or muffled, at first? Do you automatically reach for the EQ and boost the 2–4 kHz range, because you know from experience that this is the only way to get a relatively “natural” sound?
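That kind of corrective boost is typically a peaking (bell) filter. As a rough illustration of what the EQ section computes, here is a sketch using the well-known Audio EQ Cookbook biquad formulas; the function name and parameters are illustrative, not any console’s actual implementation:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ (Audio EQ Cookbook style).

    fs      -- sample rate in Hz
    f0      -- centre frequency of the bell in Hz
    gain_db -- boost (+) or cut (-) at the centre, in dB
    q       -- quality factor (bandwidth of the bell)

    Returns the unnormalized coefficients (b0, b1, b2, a0, a1, a2).
    """
    a_lin = 10 ** (gain_db / 40)          # sqrt of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a_lin
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a_lin
    a0 = 1 + alpha / a_lin
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a_lin
    return b0, b1, b2, a0, a1, a2

# A gentle 3 kHz presence boost at 48 kHz, one octave wide-ish (Q = 1).
coeffs = peaking_eq_coeffs(48000, 3000, 6.0, 1.0)
```

For a boost, the numerator gain (`b0`) exceeds the denominator gain (`a0`); for a cut, the relation flips, which is why the same formula handles both cases.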
Some consoles on the market already “sound great” right off the bat, allowing you to focus on fine-tuning critical aspects within seconds rather than minutes. This would be something to look out for before deciding on a new desk.
Another aspect is dynamics: what is on offer and, more importantly, how does it (alter the) sound? Is the compressor almost inaudible even at high ratio settings? Can you keep it from pumping out of control? To what extent does the limiter color the sound, which may require additional EQing? How accurately does the noise gate respond? And what about the expander, the de-esser, and the possibility to easily implement parallel compression?
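What “ratio” means for audibility becomes clearer with the compressor’s static transfer curve. A minimal sketch of a hard-knee gain computer (names and defaults are hypothetical, not any specific console’s processing):

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static gain change (dB) applied by a hard-knee compressor.

    Below the threshold the signal passes unchanged; above it,
    every `ratio` dB of input yields only 1 dB of output.
    Returns 0 or a negative number (gain reduction).
    """
    if level_db <= threshold_db:
        return 0.0
    # Output level above the threshold is scaled by 1/ratio.
    out_db = threshold_db + (level_db - threshold_db) / ratio
    return out_db - level_db

# A peak 12 dB over a -20 dB threshold at ratio 4:1
# is pulled down by 12 - 12/4 = 9 dB.
print(compressor_gain_db(-8.0))  # -9.0
```

Real designs add attack/release smoothing and often a soft knee on top of this static curve; how transparently that smoothing is done is precisely what separates an inaudible compressor from one that pumps.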
As always, quality comes at a price, which is usually worth paying, because the console has a service life of at least ten years. Besides, a mixing desk worth its salt offers a host of other features that are equally indispensable. Even though some audio engineers have taken to mixing even live events in the box, i.e., on a computer, a physical mixing console cannot easily be replaced with an inexpensive fader panel that can be purchased for a song. The ability to perform certain tweaks simultaneously in different sections (VCA grouping, bussing, dynamics, EQ, panning, etc.) is still an exclusive feature of “real” consoles.
There is also a third way: a console manufacturer can decide not to include any modulation, reverb, delay, or other effects at all, because what sounds perfect to one audio engineer may not be what the next favors. Leaving room in the user interface for convenient, touchscreen-based, sweet-spot control of whichever plug-ins an audio engineer decides to use stands the best chance of satisfying all users. This approach also has a positive effect on the desk’s purchase price.
Signal processing is too delicate a matter to skimp on the bread-and-butter features every operator immediately falls in love with, or simply expects. Let’s therefore get the basics spot-on first. Allowing users to choose their preferred brand of effects plug-ins—and to change their minds without having to replace the console—looks like a very mindful and sustainable approach. For only you know what good sound means to you.
Divide And Rule
Much of the above also applies to the broadcast world, of course. The workflow is slightly different, however, because standards-based SMPTE ST2110 IP networking plays a much bigger part. How so?
First of all, a broadcast operation often has several consoles for as many audio control rooms. While the channel count for global events and other high-profile productions occasionally goes wild, with more than 200 sources that need to be mixed, most projects involve between 30 and 128 channels. Lawo’s A__UHD Core, the DSP engine for the mc² console family, accommodates up to 1024 fully-featured DSP channels, which may look like overkill.
Yet, users do not need to activate all channels the engine can muster. A flexible licensing system allows them to start out with 256 channels and then add more slices of 256 as their needs evolve. The underlying philosophy of this licensing system furthermore sparked a second approach: one A__UHD Core can be shared by 4, 8, 16 or even 32 mixers. Not all of them need to be mixing consoles, by the way. It is perfectly possible to control one or several “slices”, each of which has its own routing matrix and mixing console peripherals and is operationally completely independent, from a software UI (a so-called headless mixer) or using Ember+ commands. In such a setting, 1U of processing power is enough for up to 32 audio and/or master control rooms, which is good news for your energy consumption and rack space requirements.
IP networking also allows for workflows where one or several consoles in different parts of the world access a single A__UHD Core, which can sit in yet another location. This is frequently used in live settings, often involving 5.1.4 Dolby Atmos immersive audio productions: the console is operated in city A while it controls the DSP engine in city B, anywhere in the world.
Another consideration for consoles—whether used for live performances or in broadcast—is how much can be automated, and how. AutoMix is most helpful in situations where several speakers debate on-set or co-host a show. Keeping their levels in check and allowing the engineer to prioritize one speaker over the others makes the life of a sound engineer much easier. A welcome feature of this functionality is that inactive microphones are muted or attenuated in such a way that the overall ambience remains natural.
AutoMix can be used for any signal, from mono and stereo to multiple immersive channels, to minimize background noise and crosstalk with reduced sound coloration. Truncated sentences and late fade-ins are things of the past, enabling the sound engineer to focus on overall balance and sound quality.
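The classic idea behind such automixers is “gain sharing”: each microphone receives a gain proportional to its share of the total input energy, so the summed gain stays constant and the ambience does not pump. A generic Dugan-style sketch (illustrative only, not Lawo’s implementation):

```python
def automix_gains(levels, priority=None):
    """Gain-sharing automix: each channel's linear gain is its share
    of the total weighted input power.

    levels   -- per-channel short-term input levels (linear amplitude)
    priority -- optional per-channel weights to favor, e.g., a host mic

    The returned gains always sum to 1, which keeps the total ambience
    level constant instead of hard-muting idle microphones.
    """
    if priority is None:
        priority = [1.0] * len(levels)
    powers = [p * (lvl ** 2) for lvl, p in zip(levels, priority)]
    total = sum(powers) or 1.0   # avoid division by zero in silence
    return [p / total for p in powers]

# One active talker and two quiet mics: the talker gets almost all the
# gain, while the idle mics are attenuated rather than cut off.
gains = automix_gains([1.0, 0.1, 0.1])
```

Because the gains are a share of a constant total, a mic fading in mid-sentence simply takes its share back smoothly, which is why truncated sentences and late fade-ins disappear.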
A Downmix function for creating stereo renditions from immersive mixes delivers remarkably authentic conversions using just a few parameters.
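Under the hood, such a fold-down is a weighted sum per output channel. A minimal sketch for the 5.1-to-stereo case, assuming the common ITU-style −3 dB coefficients (not necessarily the coefficients Lawo uses):

```python
import math

def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs):
    """Fold one 5.1 sample frame down to stereo.

    Centre and surrounds are mixed in at -3 dB (1/sqrt(2)), the common
    ITU-R BS.775 default; the LFE channel is typically dropped, which
    is why the parameter is unused here.
    """
    g = 1 / math.sqrt(2)          # -3 dB
    left = L + g * C + g * Ls
    right = R + g * C + g * Rs
    return left, right

# A centre-only signal lands equally in both stereo channels,
# attenuated by 3 dB so the summed energy stays plausible.
left, right = downmix_51_to_stereo(0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
```

The “few parameters” a console exposes are essentially these per-channel weights, which is what keeps the conversion predictable.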
In combination with the KICK software, which has been mandatory in the Bundesliga for several years, and a compatible tracking system, Lawo’s AutoMix function is even able to create larger-than-life ball-kick, referee and moaning noises by mixing the signals of as many as 24 microphones placed around the pitch. With a view to a clean immersive audio result, this function needs to work in such a way as to keep the ambience level constant and phase-aligned to avoid nasty coloration, while always emphasizing the important noises on and off the pitch.
Almost all TV productions involve several cameras, and directors are known to request frequent cuts between them. How does the sound engineer cope with that, given that the audio accompanying the footage is very likely picked up by different microphones, assigned to audio channels other than the ones they had been working on until the cut? The answer is called Audio-follows-Video: it provides automated transitions and a perfect coupling of image and sound.
It is based on assigning each camera’s tally to an event that can be selected for one or more channels, with a total of 128 available events. Parameters such as Rise Time, On Time, Hold Time, Max Time and Fall Time can be used to set the processing envelope in order to create amazingly smooth and natural-sounding transitions from camera to camera.
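One way to picture the resulting fader envelope is as a piecewise ramp triggered by the camera tally. The sketch below is illustrative only: it covers Rise, On, and Fall Time with assumed semantics, and omits Hold and Max Time for brevity.

```python
def afv_envelope(t, rise=0.1, on=1.0, fall=0.2):
    """Fader level (0..1) at time t seconds after a camera tally fires.

    Assumed semantics (simplified): ramp up over `rise` seconds,
    stay fully open for `on` seconds, then ramp down over `fall`.
    """
    if t < 0:
        return 0.0
    if t < rise:                     # fade in
        return t / rise
    if t < rise + on:                # fully open
        return 1.0
    if t < rise + on + fall:         # fade out
        return 1.0 - (t - rise - on) / fall
    return 0.0

# Halfway through a 100 ms rise time the fader sits at 50 %.
level = afv_envelope(0.05)
```

Shaping these segments per event is what turns an abrupt channel swap into a smooth, natural-sounding transition from camera to camera.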
It’s All About the Sound
You may have noticed that all the bells and whistles that make the audio workflow smoother in a busy broadcast production environment focus on achieving a natural sound with a constant ambience level free from phasing and unpleasant level jumps.
With a Lawo console, you are always firmly in control. And don’t be surprised if you are complimented on your sound.