How Audio Normalization Works

Audio normalization is the application of a constant amount of gain to a recording with the goal of bringing the amplitude to a target level. The signal-to-noise ratio and relative dynamics of the audio remain unchanged in the process because the same amount of gain is applied across the entire recording. Is it a good practice to normalize audio? The answer is: sometimes.

There are two types of audio normalization, a function found in most digital audio workstations (DAWs). Peak normalization adjusts the gain based on the highest signal level in the recording; loudness normalization adjusts it based on perceived loudness. Both apply a constant gain across the entire recording. With peak normalization, the gain is changed to bring the highest PCM (Pulse-Code Modulation) sample value or analog signal peak to a given level – usually 0 dBFS, the loudest level allowed in a digital audio system. Because peak normalization only searches for the highest level, it alone says nothing about the apparent loudness of the content.
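The peak-normalization step described above can be sketched in a few lines. This is a minimal illustration, assuming floating-point samples in the range -1.0 to 1.0 (where 1.0 corresponds to 0 dBFS); the function name and target level are hypothetical, not from any particular DAW:

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Apply one constant gain so the highest absolute sample value
    lands at the target level (in dBFS, where 0 dBFS = full scale)."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence: nothing to normalize
    target_linear = 10 ** (target_dbfs / 20)  # convert dBFS to linear amplitude
    gain = target_linear / peak
    return samples * gain

# A quiet 440 Hz test tone peaking at 0.25 (about -12 dBFS)
audio = 0.25 * np.sin(np.linspace(0, 2 * np.pi * 440, 48000))
normalized = peak_normalize(audio, target_dbfs=-1.0)
```

Note that the same gain multiplies every sample, which is why the relative dynamics and signal-to-noise ratio are preserved.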

Peak normalization is mostly used during the mastering stage of a digital recording to make optimal use of the available dynamic range. When combined with compression or limiting, it can provide a loudness advantage over material that has not been peak-normalized. That chain – compression and limiting followed by peak normalization – is what makes today's elevated program loudness possible.

Loudness normalization changes the gain to bring the average amplitude to a target level. That average can be a measurement of average power, such as the RMS value, or a measure of human-perceived loudness, such as LUFS. This type of normalization was created to even out the varying loudness of multiple songs played in sequence.
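A simple average-power version of this idea can be sketched as follows. This uses RMS as the loudness stand-in; real loudness normalization typically uses a perceptual measure such as LUFS, which involves frequency weighting and gating not shown here. The function name and target level are illustrative assumptions:

```python
import numpy as np

def rms_normalize(samples: np.ndarray, target_dbfs: float = -20.0) -> np.ndarray:
    """Apply one constant gain so the RMS (average power) level of the
    recording matches the target, regardless of where the peaks fall."""
    rms = np.sqrt(np.mean(samples ** 2))
    if rms == 0:
        return samples  # silence: nothing to normalize
    target_linear = 10 ** (target_dbfs / 20)
    return samples * (target_linear / rms)

# Two "songs" recorded at very different levels...
quiet = 0.05 * np.sin(np.linspace(0, 200 * np.pi, 48000))
loud = 0.8 * np.sin(np.linspace(0, 200 * np.pi, 48000))

# ...end up at the same average level after normalization
quiet_out = rms_normalize(quiet, target_dbfs=-20.0)
loud_out = rms_normalize(loud, target_dbfs=-20.0)
```

Unlike peak normalization, two files normalized this way will sound roughly equally loud, even if their peak levels now differ.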

Because loudness normalization considers the overall average level of a file – large peaks and softer sections alike – the gain it applies can push peaks past the recording medium's limits. Software offering loudness normalization normally provides the option of applying dynamic range compression to prevent clipping when this happens.
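The problem and the safeguard can be demonstrated together. In this sketch, a hard clip stands in for the limiter or compressor a DAW would actually use (a real limiter shapes gain over time rather than flattening samples); the function name and levels are illustrative assumptions:

```python
import numpy as np

def loudness_normalize_safe(samples, target_rms_dbfs=-14.0, ceiling_dbfs=-1.0):
    """RMS-normalize, then check whether the required gain pushes any
    peak past the ceiling; if so, hard-clip as a crude stand-in for
    the limiting a DAW would apply to prevent overload."""
    rms = np.sqrt(np.mean(samples ** 2))
    gain = 10 ** (target_rms_dbfs / 20) / rms
    out = samples * gain
    ceiling = 10 ** (ceiling_dbfs / 20)
    if np.max(np.abs(out)) > ceiling:
        out = np.clip(out, -ceiling, ceiling)  # prevents overload, distorts peaks
    return out

# Quiet signal with one large transient: the gain needed to reach the
# RMS target pushes that transient well past the -1 dBFS ceiling.
signal = 0.1 * np.sin(np.linspace(0, 100 * np.pi, 48000))
signal[1000] = 0.5
safe = loudness_normalize_safe(signal)
```

Without the clip (or a proper limiter), the transient here would land around 1.4 times full scale and wrap or distort on playback.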

Audio is normalized for two reasons: 1. to get the maximum volume, and 2. to match the volumes of different songs or program segments. Peak normalization to 0 dBFS is a bad idea for any component destined for a multi-track recording: as soon as extra processing or additional tracks are added, the audio may overload.

It should be remembered that audio normalization is a destructive process: performing digital processing on a file changes it in some way. Normalization earned its poor reputation in the early days of digital audio, when all files were 16-bit and turning the level down threw away low-order bits, reducing the effective resolution. As digital audio quality has improved, with higher bit depths and floating-point processing, normalization no longer degrades the audio's quality in the same way. Now it is more like turning up the volume – nothing more.

Whether to use normalization at all depends on the program content and the skill of the operator. If gain staging has not been done properly, maxing out the audio can introduce unwanted artifacts. When gain staging is correct, however, normalization can be beneficial. But remember, times have changed: many streaming services now adjust music levels in their own facilities. So know where your audio is headed before deciding to normalize.

Normalization is a tool whose results depend on the skill of the person using it. It can easily be abused and cause an unnecessary loss of sound quality. So as with any tool, use it with caution. Know what you are doing.
