The Oswalds Mill Audio Monarch speaker system uses two 15-inch woofers and baffles to extend and improve low-frequency response.
Acoustic impedance is analogous to electrical impedance, and we all know that impedance matching is important in electronic systems. Here John Watkinson looks at the importance of acoustic impedance to loudspeaker design.
One of the biggest problems with sound reproduction of any kind is that the density of air is very low. In almost every kind of transducer we can make, the moving mass is dominated by the mass of the diaphragm which always exceeds the mass of the air we are trying to move by a large factor. We end up moving a large parasitic mass instead of a small intended mass and the result is bound to be inefficient.
In engineering parlance, the impedance of the air is low, so when we move it there is little resistance. Small resistance means that only a small amount of work is done. Figure 1(a) shows a woofer in free space. As its diaphragm is much smaller than the wavelengths it is radiating, it’s effectively omni-directional. In Figure 1(b) the same woofer is sitting on the floor. The solid angle into which it can radiate has now been halved by the presence of the floor.
Figure 1. In (a), a sound source in free air radiates into a sphere. In (b), where the source is located on the ground, the pressure radiates into a hemisphere. In (c), the source is located at the junction of a floor and wall and the sound pressure radiates into a quarter sphere.
The presence of the floor alters the acoustic impedance seen by the woofer, but because the air load is so small, it doesn’t significantly alter the mechanical impedance seen by the coil, so the amplitude of the diaphragm motion is unchanged. This means that the same power is radiated as before, except that because the solid angle is halved the intensity of the sound is doubled. The woofer appears to have become more efficient, whereas in fact we have given it what an antenna designer would call forward gain, by directing its energy. You can do the same trick by putting a mirror behind a light bulb. The light intensity is doubled by the virtual light bulb behind the mirror.
It’s common for acousticians to model sound sources near walls by putting a virtual sound source as far behind the wall as the actual source is in front. The wall then disappears. However, it is important to pay attention to the time domain, because the path length from the virtual source is longer. It is very easy to reach the wrong conclusions if the timing differences due to the path length are not allowed for.
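The bookkeeping behind the image-source model is easy to sketch. In this minimal example (the geometry, with the listener on the wall normal through the source, is an illustrative assumption), the virtual source adds an extra path of twice the source-to-wall distance, and the timing difference follows directly:

```python
C = 343.0  # speed of sound in air, m/s (approximate, at room temperature)

def image_delay_ms(source_to_wall: float, listener_to_source: float) -> float:
    """Arrival delay, in ms, of the wall reflection relative to the
    direct sound, with the listener on the line through source and wall."""
    # The virtual source sits as far behind the wall as the real source
    # is in front, so the image path is listener_to_source + 2 * source_to_wall.
    extra_path = 2 * source_to_wall
    return 1000 * extra_path / C

# A source 0.5 m from a wall, heard 3 m away: 1 m of extra path,
# which is roughly 2.9 ms of delay.
print(image_delay_ms(0.5, 3.0))
```

At typical room distances these delays are a few milliseconds, which is exactly the time-domain detail that is lost if the virtual source is treated as arriving simultaneously with the real one.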
As a loudspeaker is only a microphone working in reverse, a microphone placed on a large flat surface becomes twice as sensitive. That’s the principle behind the pressure zone microphone.
If the woofer is now placed in the angle between the floor and a wall, as shown in Figure 1(c), the solid angle into which it radiates will now be one quarter of a sphere and the intensity will double again. Put the woofer in a corner and the solid angle becomes one eighth of a sphere.
We can hear these effects clearly with our own ears, by placing our head near a wall, in the angle between two walls, or in the small angle between two walls and the floor, where such an undignified position can be excused by claiming to be looking for a contact lens.
Figure 2 (a) shows a legacy 2-way loudspeaker with a woofer that is omni-directional at low frequencies and a tweeter that beams sound forwards. Assume the speaker has been equalised so that the presence of the floor has been taken into account.
Let's consider what happens if the speaker is placed near a wall, as in Figure 2 (b). The wall doesn’t have much effect at all on the sound from the tweeter, because it is radiated away from the wall, whereas the efficiency of the woofer is doubled. The result is that the equalisation of the speaker is altered significantly and it becomes bass-heavy.
Figure 2. In drawing (a) the legacy speaker is equalised so that LF = HF. In drawing (b), the same speaker is then located near a wall. Note that the HF output remains unchanged but the LF level has increased.
Some loudspeakers offer the ability to adjust the relative level of HF and LF by moving plugs or links on the back panel.
Such adjustments are not necessary with a loudspeaker that is omni-directional at all frequencies. It is a characteristic of such speakers that they generally do not need to be equalised for their position in the room. One often hears that such speakers have to be placed a long way from walls. It’s not true, as a simple practical test will illustrate.
Figure 3 shows a mid-range unit mounted on the flat front baffle of a traditional box-shaped speaker. At 1-2kHz, the wavelength is larger than the diaphragm, so the mid-range unit effectively radiates over a hemisphere and sees the appropriate acoustic impedance.
But what happens to the sound that is radiated parallel to the baffle? Eventually it reaches the corner of the box. But Figure 3 also shows that the corner of the box has access to three quarters of a sphere. The result is a change of acoustic impedance at the edge of the baffle.
Figure 3. Sound from a tweeter in the middle of a baffle radiates into a hemisphere, but at the corner of the enclosure there is access to three quarters of a sphere, which results in an impedance mismatch.
We know from video practice the importance of terminating cables, because an impedance change in a cable will cause a reflection. But a cable is one dimensional, whereas the acoustic problem is three dimensional. An acoustic impedance change causes diffraction, which is where re-radiation takes place from the location of the impedance change.
There are a number of interesting effects that follow from cabinet diffraction. The diffracted sound from the edges of the baffle interferes with the direct sound. The path length difference means that the sharp-edged baffle acts like a comb filter, making the frequency response periodic. At some wavelengths, the two sound sources are in-phase, at others they are out of phase.
If the directivity pattern is measured, as the test microphone rotates around the speaker, or vice versa, the path length differences change, with the result that at a fixed wavelength the polar response becomes periodic. As the frequency rises, the periodicity in the polar diagram increases until it looks like a hedgehog.
The solution to this problem that is in almost universal use is to smooth the polar diagram. That’s right, the way of dealing with the evidence that your speaker is suffering from diffraction is to suppress the evidence. This is like closing the curtains instead of weeding the garden.
One of the difficulties with cabinet diffraction is that a small change in listening position causes a large change in frequency response. This is one of the factors involved in the creation of the “sweet spot” from which the best sound is obtained in a legacy speaker. It should be noted that there is no sweet spot when listening to a real sound source.
Loss of stereophonic imaging
One of the most damaging results of cabinet diffraction is that the stereophonic imaging ability is impaired. Figure 4 (a) shows that ideal stereophonic imaging is obtained from point sources, which can create a virtual point source anywhere between the speakers. Figure 4 (b) shows that instead of a point source, the speaker has become a distributed source, and the virtual source created between such speakers cannot be smaller than the speakers themselves. The result is smear: point sources of sound in the image are spread until they are the width of the speaker.
Figure 4. In (a), point source loudspeakers can create virtual point sources between them. Illustration (b) shows that distributed sources can only create smeared virtual sources. In (c), we see an ideal sound stage created using pan pots and artificial reverberation. Finally, (d) shows that when reproduced on legacy speakers, the sound stage of (c) is smeared and the reverb is masked.
In this way the legacy loudspeaker imprints its own characteristics onto the sound. This is one of the reasons that loudspeakers sound like loudspeakers and a key source of listening fatigue. Loudspeakers should not have a sound of their own.
Figure 4 (c) illustrates a typical sound stage captured by an accurate stereophonic microphone, such as a coincident pair or created by pan pots and artificial reverberation. There are dominant small sound sources, perhaps vocalists or instruments, and between the sound sources can be heard the reverberation of the room.
In Figure 4 (d) we see that the sound stage, as reproduced by legacy loudspeakers, is suffering from smear. The small sources have been spread so that the reverberation is masked, with the result that the reproduction sounds dry.
The effect of codecs
Many audio codecs seek to save data by removing low-level sounds on the basis that they can’t be heard. If this is done to the situation in Figure 4 (d), it will succeed, because the reverberation is already masked, so no one will know it’s gone. On the other hand, if the same codec is auditioned through a pair of true point source speakers, it will immediately be obvious that the reverberation is absent because it is not masked.
Two things follow from this. The first is that sub-standard audio codecs or inadequate bit rates came into use because legacy speakers were unable to reveal the problems. The second is that, like a codec and the human ear, a real loudspeaker has an information capacity which can in principle be measured.
In Figure 5 we see that the performance of the HAS (human auditory system) needs to be equalled or exceeded for high quality reproduction. At (b) the quality of a codec stops increasing with bit rate. At (c), with a better speaker, the quality stops increasing at a higher bit rate. In fact at (b) it is not the codec that is being tested, it is the speaker. This is how inferior speakers can mask the performance of inadequate codecs or insufficient bit rates.
In Figure 5 (a) the graph shows a codec being tested with an ideal sound source and ideal speakers. As the bit rate increases, the quality at first rises, but once the information capacity of human hearing is reached, there is no further improvement. At (b) a legacy loudspeaker is in use and this time the sound quality fails to increase when the information capacity of the speaker is reached. Between curves (a) and (b), if you think the speaker is testing the codec, you are mistaken. In fact the codec is testing the loudspeaker.
This suggests a way of assessing loudspeaker quality. With a given codec, the higher the bit rate that is needed before the quality stops improving, the better the loudspeaker.
Today it is quite easy to obtain a CD containing a high quality version of a particular song, along with a compressed version, such as a downloaded MP3. If a particular pair of speakers doesn’t reveal an obvious difference in sound quality between the two sources, the speakers are no good.
John Watkinson, Consultant, publisher, London (UK)
Watkinson has now written 13 chapters in his on-going treatise about loudspeakers. A link to Part 1 in this series can be found below. All parts in Watkinson’s loudspeaker series can be found by searching for “Watkinson” from the search box on The Broadcast Bridge homepage.
John Watkinson has a new book readers may wish to view.
In The Art of Flight, John Watkinson chronicles the disciplines and major technologies that allow heavier-than-air machines to take flight. The book is available from Waterstones Book Store.