Audio For Broadcast: Equalizers (EQ)
EQ is one of the central tools of the audio production process and with a modest amount of knowledge and practice, a little can go a very long way to improving the subjective quality of a broadcast.
In addition to everything else a sound operator has to deal with, critical listening is a constant. Critical listening is not just about knowing when an audio source sounds bad – we can all do that – but is about identifying why it sounds bad, which part of the signal is causing the problem and how it can be fixed.
Equalisation (EQ) is one of the tools that sound engineers use to help fix these issues, and while modern mixing equipment can show where a signal is misbehaving with beautifully precise EQ curves and detailed visual representations, most audio operators don’t mix with their eyes. That’s why they have ears.
As we have seen, dynamics help keep signals confined within a dynamic range, but EQ can enhance those signals to compensate for external factors such as background noise and challenging room acoustics, and to provide more context to the output for specific content. In essence, it corrects nasty artefacts and aesthetically shapes the audio for the broadcast output.
Used in combination with dynamics, these two tools can fix the vast majority of challenging input sources. In the good old analogue days, an input signal would work its way down a channel strip through EQ and into dynamics. These days, modern digital consoles or workstations provide lots of ways to combine EQ and dynamics, with plug-ins or external hardware inserts providing even more options.
Unsurprisingly, changes to EQ will affect a signal’s dynamics and vice versa, and there is a great deal of spirited debate over which should come first (spoiler: both sides are correct).
But to understand how they affect each other, let us start with what EQ is, and how it is applied.
Everybody Hz
Frequency is measured in Hertz (Hz), and the sounds an adult can hear sit in a range from roughly 20Hz (low) to 20kHz (high).
EQ is used to adjust the character and tone of a sound by boosting or attenuating (reducing) specific frequencies in this range. Expensive-looking hi-fis in the 1980s often had graphic equalisers which did this. Like supercharged bass and treble controls, but with more slices. Graphic equalisers provide a series of controls (usually sliders), each one affecting a narrow band of the frequency spectrum. The design is intuitive: controls on the left affect low frequencies, working their way across to those on the right which control high frequencies.
These controls don’t add anything to the actual signal – they simply allow a listener to change the emphasis of frequency ranges within what already exists. Broadcast equipment goes beyond what a graphic equaliser can achieve by throwing even more controls at it. With a parametric EQ (PEQ), instead of having one control per band, we are given a small set of adjustable controls.
The frequency control allows us to select the centre of the frequency band to be adjusted - it can usually be set anywhere from 10Hz to 20kHz.
The gain control determines the amount by which that frequency band will be boosted or attenuated. This is measured in decibels (dB).
Because these adjustments will also affect some of the frequencies either side of the chosen value, the bandwidth control (also referred to as “Q”, for quality) determines how wide this band will be. Rather than being measured as a frequency range, the Q setting is measured in octaves; this is because the frequency range we hear is logarithmic rather than linear, so defining the bandwidth in octaves adjusts the filter logarithmically and maintains its musicality across the whole spectrum.
A lower Q setting makes the band wider, influencing a broader range of frequencies either side of the centre value and producing a more natural, subtle sound. A higher Q setting produces a narrower band, which influences a much smaller range of frequencies either side of the centre value.
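To make those three controls concrete, here is a minimal Python sketch of a single parametric band built from the widely published ‘Audio EQ Cookbook’ peaking-filter formulas. The sample rate, centre frequency, gain and Q values are purely illustrative.

```python
# A single parametric 'bell' band as a biquad, using the Audio EQ
# Cookbook peaking-EQ formulas. All values below are illustrative.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients (b, a) for one parametric EQ band.

    fs      -- sample rate in Hz
    f0      -- centre frequency in Hz (the 'frequency' control)
    gain_db -- boost (+) or cut (-) in dB (the 'gain' control)
    q       -- bandwidth: lower = wider bell, higher = narrower bell
    """
    a_lin = 10 ** (gain_db / 40)              # linear amplitude from dB
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)

    b = [1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]

# Example: a gentle 3dB lift centred at 3kHz with a fairly wide bell (Q = 0.7)
fs = 48000
b, a = peaking_eq(fs, f0=3000, gain_db=3.0, q=0.7)
test_signal = np.random.randn(fs)             # one second of noise as a stand-in
boosted = lfilter(b, a, test_signal)
```

Sweeping f0, gain_db and q in that sketch behaves just like the three front-panel controls described above.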
Ding Dong
Even though sound engineers use their ears to mix, there are some fabulously descriptive names to help operators visualise what is going on. The EQ filter we have been describing is known as a bell curve, because it looks exactly like a bell. How lucky is that?
With a very narrow Q setting, the bell becomes so slender that it turns into a notch filter. A notch filter affects fewer of its surrounding frequencies, and although this sounds less natural it is incredibly useful to sound operators.
Notch filters are used to remove noises which occur at a particular frequency. When combined with heavy attenuation, the narrow bandwidth can pinpoint and entirely remove problematic frequencies while having minimal effect on those around them. Popular uses of a notch filter are removing microphone feedback and filtering out noise from a mains supply at around 50Hz/60Hz. They can also be used to reduce sibilance within vocals on those troublesome S’s, which tend to occur around 5kHz to 8kHz.
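As an illustration, here is a small Python sketch of a mains-hum notch using scipy's built-in iirnotch design; the 50Hz target, the Q of 30 and the sample rate are illustrative choices rather than fixed broadcast settings.

```python
# Notch out 50Hz mains hum with a very narrow band so that
# neighbouring frequencies are barely touched. Values are illustrative.
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000                        # sample rate in Hz
hum = 50.0                        # 50Hz in Europe, 60Hz in the US
b, a = iirnotch(w0=hum, Q=30.0, fs=fs)

# Test signal: a 440Hz tone with mains hum riding underneath it
t = np.arange(fs) / fs
dirty = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * hum * t)
clean = lfilter(b, a, dirty)      # hum removed, the 440Hz content left largely intact
```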
High Pass
As well as controlling troublesome frequencies, EQ is also used to make sonic improvements to a signal. The explosion in podcasting and the affordability of consumer recording technology means that the internet is at bursting point with tutorials on how to EQ the voice, and vocals can be hugely improved with very minor adjustments.
A high pass filter is used to reduce or remove low frequency noise on a signal, and is often applied to vocal tracks simply to remove information which is not required, attenuating low frequencies while allowing high frequencies to pass through. As the frequency range of a human voice starts at around 100Hz, setting a filter at 100Hz can remove a lot of unnecessary low-end noise such as air conditioning, traffic and generators.
At the top end of the scale a low pass filter attenuates high frequencies while allowing low frequencies to pass through.
In music production an engineer may keep more of the high end to provide some brightness, but in a broadcast environment, with a reporter on location, it is often unnecessary and the application of a low pass filter can clean up the signal by once again removing information which is not adding anything to the broadcast.
For both pass filters, the Q setting is replaced by a slope which determines how quickly the filter attenuates beyond the cutoff frequency. As with a bell curve, the units combine gain and frequency: the gradient is expressed in dB per octave.
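As a rough sketch of both pass filters in Python, assuming Butterworth designs from scipy: the 100Hz high pass corner comes from the example above, while the 8kHz low pass corner and the filter order are illustrative (each order of a Butterworth filter adds roughly 6dB per octave to the slope).

```python
# A 100Hz high pass and an (illustrative) 8kHz low pass.
# The order sets the slope: order 2 is roughly 12dB per octave.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000

# High pass at 100Hz to strip rumble, traffic and air conditioning noise
hp = butter(N=2, Wn=100, btype='highpass', fs=fs, output='sos')

# Low pass to tame unneeded top end on a location feed (corner is illustrative)
lp = butter(N=2, Wn=8000, btype='lowpass', fs=fs, output='sos')

location_feed = np.random.randn(fs)            # stand-in for a reporter's mic
cleaned = sosfilt(lp, sosfilt(hp, location_feed))
```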
Enjoy Your Shelf
A shelf is similar to a pass filter, although shelves can be used to boost frequencies as well as reduce them. A low shelf affects frequencies below its corner at the lower end of the spectrum, while a high shelf affects frequencies at the higher end of the range. Shelves are more subtle than a pass filter because they attenuate (or boost) the sound beyond the shelf by a fixed amount rather than removing it entirely.
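Here is a minimal sketch of a low shelf, using the same cookbook biquad formulas as the bell curve sketch earlier; the 120Hz corner and 2dB of gain are illustrative.

```python
# A low shelf: lifts (or dips) everything below a corner frequency by a
# fixed amount, rather than rolling it off like a pass filter.
import numpy as np
from scipy.signal import lfilter

def low_shelf(fs, f0, gain_db, q=0.707):
    """Biquad coefficients for a low shelf (positive gain lifts, negative dips)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    cosw, alpha = np.cos(w0), np.sin(w0) / (2 * q)
    root = 2 * np.sqrt(a_lin) * alpha

    b = [a_lin * ((a_lin + 1) - (a_lin - 1) * cosw + root),
         2 * a_lin * ((a_lin - 1) - (a_lin + 1) * cosw),
         a_lin * ((a_lin + 1) - (a_lin - 1) * cosw - root)]
    a = [(a_lin + 1) + (a_lin - 1) * cosw + root,
         -2 * ((a_lin - 1) + (a_lin + 1) * cosw),
         (a_lin + 1) + (a_lin - 1) * cosw - root]
    return [x / a[0] for x in b], [x / a[0] for x in a]

# Example: a 2dB lift below 120Hz to warm up a thin-sounding voice
fs = 48000
b, a = low_shelf(fs, f0=120, gain_db=2.0)
warmed = lfilter(b, a, np.random.randn(fs))
```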
Which Came First, The Chicken Or The EQ?
While EQ and dynamics work well together, there are no rules as to what that relationship should be.
Like everything in broadcast, treatment will depend on both the audio source and the motivation for the output. Every live signal is unique and unpredictable, and there are multiple factors which will determine how they are treated.
Even simple things like microphone placement can make a huge difference to the level and frequency content of the signal, while whispering, shouting, and singing all change the level and frequency spectrum of the input. Different microphone types also play a big part; ribbon mics, for example, have a high frequency roll-off and more pronounced low frequencies when the subject is close to the mic.
All this means that the processing order must fit the workflow and what the broadcast demands of it.
With an ‘EQ before compression’ model, boosts or cuts to the EQ will change the level of the signal going into the compressor, and an engineer may have to keep adjusting the compressor’s threshold. If a source is going to need EQ tweaking over time due to changes in the surrounding environment, this might not be a good option. Tweaking the EQ after the compressor removes this issue, but means that artefacts present in the incoming signal can drive the compressor in ways that aren’t helpful.
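A toy Python sketch of why the order matters, using deliberately simplified stand-ins for an EQ stage and a compressor (the eq_boost and compress functions below are hypothetical placeholders, not any console’s processing):

```python
# Why 'EQ before compression' and 'compression before EQ' give different
# results. eq_boost and compress are crude, hypothetical stand-ins.
import numpy as np

def eq_boost(x, gain_db):
    """Stand-in for an EQ stage: a simple broadband boost for illustration."""
    return x * 10 ** (gain_db / 20)

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Stand-in for a compressor: reduce gain on samples above the threshold."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    return x * 10 ** (-over * (1 - 1 / ratio) / 20)

x = np.random.randn(48000) * 0.1               # stand-in input signal

eq_then_comp = compress(eq_boost(x, 6.0))      # the 6dB boost drives the compressor harder
comp_then_eq = eq_boost(compress(x), 6.0)      # the compressor sees the raw signal instead

# The two chains do not match: in the first, more of the signal crosses the
# threshold, so more gain reduction is applied - exactly the threshold-chasing
# problem described above.
```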
These relationships are complex, and sound operators may often choose to correct issues with EQ pre-compression, and sculpt the sound for aesthetic reasons post-compression. In live broadcast the EQ will often be used to give some gravitas to a voice, or to notch out an annoying low-end rumble or mains hum.
It is easy to fall into the trap of assuming that EQ is about creatively boosting the frequencies which sound good. It can be better to approach EQ as a subtractive, corrective tool - removing any unwanted or harsh frequencies to reveal, and therefore emphasise, the desired, pleasing frequencies. Once unwanted artefacts have been sculpted away, sparing use of frequency boosting usually works well.
Ultimately, while complementing each other beautifully, EQ and compression do very different jobs. Perhaps a better way to look at them is not as a strictly linear process, but to assess whether an EQ’d signal needs any compression, or whether a compressed signal needs any EQ.
You know, by critically listening to it. Either way, the constant is that the sound engineer is always listening.