Audio For Broadcast: Analog Vs Digital

The basic principles of how sound works, tackled from a contextual perspective through a discussion of the evolution from analog to digital audio technology.

This is the story of how broadcasters made the shift from analog to digital audio. It is arguably the most emotive shift in broadcast production in recent years, and it has transformed the audio equipment that is used in modern broadcast workflows.

Some people will tell you that once upon a time everything made sense and that everything was analog. Processes were linear; signals would go in, they would get tweaked and grouped, and signals would go out. Everything was routed with physical patch bays and signals could be traced by following an actual cable.

All sound is analog. Sound is just the vibration of air particles; it is a continuous process which allows the ear to pick up every slight change in pressure with no bandwidth limitations. Bandwidth is the range of frequencies covered in a continuous band, and an average human can perceive from 20 Hz to 20 kHz. But analog audio isn’t bound just by what we can hear – for example, dogs can hear up to 60 kHz – and analog manipulation of those sounds is just as unconstrained.

Recorded and broadcast sound converts the vibration of air particles into electronic signals for transport and manipulation, and for many decades the broadcast production chain used analog circuitry to provide a continuous representation of the sound that was captured. Microphones change the sound into an electronic signal, the audio console manipulates that signal across the same frequency range, and then outputs it again.

In the early 1980s the introduction of hybrid broadcast consoles provided the ability to digitally control analog signal paths – memories, snapshots and automation enabled audio engineers to be more flexible, and while the signal path was still analog, the seeds were sown. Digital audio for live broadcast wasn’t far away, and as digital television became a focus for broadcasters in the 1990s, digital tools became more common.

There Are Only 10 Types Of People In The World

There’s a huge amount of technical theory on digital audio, but let’s keep things very simple.

Unlike that unconstrained analog audio, digital audio is an approximation of the sound rather than the full, flowing, continuous range. Sound information is sampled at fixed points on the sound wave, and to ensure a faithful representation the sampled audio bandwidth has to be restricted.

At any moment in time the value of a digital signal can be measured precisely, and digital audio is created by periodically sampling the incoming analog signal. How often these samples are taken over time is referred to as the sample rate. A sample rate of one hertz means one sample is taken per second, and audio sample rates are measured in kHz; CDs use a sample rate of 44.1 kHz, broadcast digital audio tends to operate at 48 kHz, and HD audio is commonly assumed to be 96 kHz (but at any rate must be higher than 44.1 kHz).
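To make that concrete, here is a minimal Python sketch of sampling, assuming a hypothetical 1 kHz test tone; the tone frequency and duration are illustrative values rather than anything from a real broadcast chain:

```python
import math

sample_rate = 48_000   # broadcast digital audio rate, in samples per second
tone_freq = 1_000      # hypothetical 1 kHz test tone
duration = 0.01        # seconds of audio to capture

# One sample of the continuous wave is taken every 1/sample_rate seconds.
num_samples = int(sample_rate * duration)
samples = [math.sin(2 * math.pi * tone_freq * n / sample_rate)
           for n in range(num_samples)]

print(f"{num_samples} samples represent {duration}s of audio at {sample_rate} Hz")
```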

Digital systems are binary. They store information in strings of zeros and ones, and it is the job of an analog to digital converter (an A/D or ADC) to convert that signal. The maximum length of the string for each sample determines the total amount of information that can be stored for the sample – which is referred to as the bit depth or word length. A 16-bit sample can represent 65,536 discrete values, whereas a 24-bit sample can represent 16,777,216. A 24-bit sample can therefore contain a far higher resolution representation of the audio signal. The main effect of this in digital audio is that a 16-bit sample has a theoretical maximum dynamic range of 96dB whereas a 24-bit sample has a theoretical maximum of 144dB. In reality, the audio converters commonly found in today’s technology cannot achieve 144dB of dynamic range; 120dB is more realistic with a good quality converter.

Bit depth should not be confused with bit rate, which refers to the number of bits transmitted per second.
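Both figures fall out of simple arithmetic, sketched below in Python; roughly 6.02 dB of dynamic range per bit is the standard approximation, and the uncompressed bit rate is just the product of sample rate, bit depth and channel count:

```python
def levels(bit_depth):
    """Discrete values a sample of this word length can represent."""
    return 2 ** bit_depth

def dynamic_range_db(bit_depth):
    """Theoretical maximum dynamic range, approximately 6.02 dB per bit."""
    return 6.02 * bit_depth

def bit_rate_bps(sample_rate, bit_depth, channels):
    """Uncompressed bit rate: samples per second x bits per sample x channels."""
    return sample_rate * bit_depth * channels

print(levels(16), levels(24))          # 65536 16777216
print(dynamic_range_db(16))            # ~96 dB
print(dynamic_range_db(24))            # ~144 dB
print(bit_rate_bps(48_000, 24, 2))     # 2304000 bits/s for 24-bit stereo
```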

Whatever is left out at this stage can’t be added in later, which means any digital signal is only as good as the A/D conversion from the original analog sound.

In order to recreate the signal it must be passed through a digital to analog converter (a D/A or DAC) at the other end. The Nyquist Principle, named after the Swedish-born engineer Harry Nyquist, states that if the sample rate is at least twice the maximum frequency of the signal being sampled, the DAC can render an output waveform identical to the input waveform. So, if a sampled audio system is required to carry signals up to what a human can hear - 20kHz - the sampling rate must be at least 40kHz.

This explains why digital audio has to have a restricted bandwidth, and also why those poor dogs aren’t enjoying broadcast content as much as they could be.
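What happens without that restriction is aliasing: any frequency above half the sample rate folds back down into the audible band. A toy Python sketch, assuming a 30 kHz tone that only the dog was enjoying:

```python
sample_rate = 48_000
tone = 30_000                    # above the 24 kHz Nyquist limit

# An unfiltered tone above sample_rate / 2 folds back (aliases) into
# the audible band after sampling.
alias = abs(tone - sample_rate)
print(f"A {tone} Hz tone sampled at {sample_rate} Hz reappears at {alias} Hz")
# -> 18000 Hz, which is why the input is band-limited before the ADC.
```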

Digital Broadcast Consoles

Digital audio provided an opportunity for sound designers to achieve much more in broadcast, and early adopters began to install digital broadcast consoles into audio control rooms in the 1990s.

They were flexible, they benefited from cumulative software updates, they were more powerful, and they had huge I/O matrices. They also reduced installation costs by using less cabling, and once signals had been digitised, they remained in the digital domain throughout the production chain, which made for easier integration with digital video systems.

Once in that environment, Digital Signal Processing (DSP) is used to manipulate those signals in real time. By this point the signals are all numbers – ones and zeros – and the DSP manipulates them all mathematically.
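As a minimal sketch of what that means in practice, a fader move is nothing more exotic than multiplying every sample by a constant; the sample values below are illustrative:

```python
def db_to_linear(gain_db):
    """Convert a fader value in dB to a linear multiplier."""
    return 10 ** (gain_db / 20)

def apply_gain(samples, gain_db):
    """Scale every sample by the same factor: a gain change in DSP terms."""
    g = db_to_linear(gain_db)
    return [s * g for s in samples]

# Pulling a channel down by 6 dB roughly halves each sample value.
print(apply_gain([0.5, -0.25, 0.8], -6.0))
```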

The good thing about DSP is that it is endlessly adaptable, and its firmware can be programmed to do very specific jobs, which enables designers to develop new features and build more value into a product in a relatively short space of time.

For many years most commercially available consoles used the same off-the-shelf floating-point chips for DSP, and as capacity increased so did the number of chips required to process channels. As broadcast mixes became more complex more chips were required, with more PCBs, more backplane activity, and greater potential for on-air failure.

As broadcasters prepared for HD, the onset of televised 5.1 surround sound multiplied the number of required audio channels further still. Now, for every two-channel stereo input, broadcasters needed to provide a six-channel 5.1 input, with a stereo downmix for legacy formats.
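As a sketch of what that downmix involves, here is the commonly used ITU-R BS.775 approach in Python; the channel names are illustrative, and in practice the LFE channel is often simply omitted:

```python
COEFF = 0.7071  # -3 dB, the standard ITU-R BS.775 downmix coefficient

def downmix_5_1_to_stereo(left, right, centre, lfe, left_s, right_s):
    """Fold a 5.1 feed down to stereo; the LFE is dropped, as is common."""
    lo = left + COEFF * centre + COEFF * left_s
    ro = right + COEFF * centre + COEFF * right_s
    return lo, ro

print(downmix_5_1_to_stereo(0.2, 0.1, 0.3, 0.0, 0.05, 0.05))
```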

Open The Gates

Conventional DSP systems were limited by the number of signals passing between DSP cards, fighting for space on the backplane along with I/O. They were constrained by backplane speed and they took up more space.

Broadcast was a grateful adopter of Field Programmable Gate Arrays (FPGAs), and broadcasters still use them for DSP today. FPGAs are blank chips which provide a canvas to create processing structures to perform very specific tasks. FPGAs meant that processing power could be tailored to exceed the channel counts which were possible with traditional DSP chips. It was a total step change, a paradigm shift.

It also changed the way people thought about DSP for audio; FPGAs don’t come with any limitations on bit depth, which provided some manufacturers with an opportunity to select the number format (the bit depth) to meet the level of performance required by each function.

We’re Going To Need A Bigger Desk

This increase in capacity in turn drove the design of broadcast audio consoles, as the worksurface became the bottleneck; the ability to control and manage such a huge number of channel inputs was now the limiting factor. Digital architecture gave broadcasters the ability to provide more immersive, more involving content, and hardware UI design adapted to fit these bigger workloads.

For large-scale broadcast audio processing, FPGAs are still the most efficient way to do things, using either on-premises or edge hardware. But as cloud workflows become more acceptable and the benefits become more tangible, expect things to change again.

Analog audio still has its fans, and as any visit to an online audio forum will show you, they are vociferous. The irony is that most modern output is a combination of the two – even if something has been recorded and mastered in a fully analog workflow, it’s most likely being streamed and listened to in a digital format, at whatever bit rate it has been converted to.

As consumers we’ve traded quality for convenience, and digital audio has allowed all that to happen.
