John Harris became a music mixer for broadcast television at a time when there was no such job. In the decades since, he’s won 12 Emmys, three Grammys and a Peabody Award, and has been at the forefront as the industry has made the transition from stereo to 5.1 surround and now immersive audio.
“A lot of the same mental processes go on, whether you have 22 channels or two,” says Harris, who has built an extensive list of TV music mixing credits over the past 30-plus years. During that time, he’s mixed music specials, concerts and documentaries by the dozens as well as scores of events such as the Rock and Roll Hall of Fame and the annual Grammy Awards presentation, which introduced a 5.1 mix for viewers in 2003.
Over the past year or so, Harris has been preparing for the U.S. broadcast industry’s wider adoption of immersive audio at his home studio in Pennsylvania, which is outfitted for mixing in Dolby Atmos. In early 2020, at the start of the pandemic lockdown, he partnered with music mixer Jody Elff, who also has an Atmos mix room at his home in New York, to offer remote broadcast mixing and recording services through their company, HEAR (Harris-Elff Audio Resources).
Choose The Best Seat
Harris essentially approaches a TV music mix from the viewpoint of an audience member in one of the best seats in the house. “If you go to see a show and you are in seat K15, right there in the sweet spot, that is really what I’m trying to do, in my own way. So everything but the speakers in the front I deem as ‘experience’ speakers,” Harris says.
“What is the experience that I want to create? I start out in stereo, because the band is in front of me. When I start, the desk is diverged all the way. There’s nothing but left and right until I decide to put something in there. But now, how do I make it cool? I’m going to make it cool a bunch of ways, and they all use my ‘experience’ speakers,” he says.
In 5.1, and especially Dolby Atmos, he says, “You now have this amazing palette of experiential tools. The music doesn’t need to be behind, above or below you for it to be an amazing show. And I can move the experience of the room around; I can move that space as far forward or back as I want to, and the same with the height.”
There is a school of thought that suggests capturing the ambience of a space for Dolby Atmos presentation by hanging microphones for the height channels at points determined by their distance from the source and from the floor and walls of a venue. “Which I don’t subscribe to,” Harris says. Many of the live music events he mixes for TV are in arena-sized venues with a large PA system pointing at the audience. “The truth is that you can have the greatest live house mixers — and I work with all of them — and a room at 60 feet off the floor will not sound good. It’s a big metal building that somebody probably built for hockey or basketball, and it’s meant to generate excitement from all spots in the room; that’s the point, the cheering and participation of people.” But that doesn’t necessarily translate to a good immersive music mix.
Mixing A Sense Of Space
As he has been experimenting with immersive mixing at his studio, Harris has instead found another way to use the height channels available to him with Atmos to deliver a sense of space and the excitement of the crowd to viewers at home. “When you’re in the hall, you can hear the people in the nosebleed seats. They’re probably louder, because they paid a lot of money for that terrible seat and they’re so excited to be there that they make even more noise.”
In his trial immersive mixes, he says, “As soon as those people applaud, now you hear them; you’re enveloped in the experience. They’re cheering and reacting, surrounding you. So now I can use that experience, from low to high. That’s what I’m finding fascinating with my experiments here at my studio at home.”
Overall, he says, “My job is to make the musicians sound great.” In the room on the night, the band may already sound great, but Harris can make it something really special for the home audience. There is one thing above all that he has taken away from his conversations with renowned recording engineer and mixer Chris Lord-Alge, he says: “Is it cool or not? Because if it’s not, you’re doing it wrong. Nobody wants it to be what it really is and true to nature. We’re creating a whole new version. I get to put the band together and take them apart a little, in a pleasing way, to make them feature.”
For instance, there may be a signature instrumental lick — say, a synthesizer or a guitar line — that he can make stand out. “All of a sudden it starts popping around you. You go, ‘How cool is that?’ I’m able to stay completely in my lane, but I can play with it.”
That approach has its detractors: “All the couch-sitters will say that that doesn’t happen in nature, but that’s such baloney. You are meant to entertain, to create excitement. I stand on the stage all the time with bands, and when you’re standing where the lead singer stands and the band is playing, it’s really cool. But that’s not what they want on the air; that’s not what anybody wants on the air.”
Creating An Immersive Setup
One tool at his disposal is to widen the stereo image, using the additional speakers available in an immersive setup. “Now my stereo can be very wide, and it doesn’t have to be psychoacoustic; I can really widen it out.” In a 5.1 setup, Harris says, “As you would move something out a little further past the left and right [speakers], the contributing partner was the speaker behind you; that would be a little disconcerting. But 7.1 gives you a bigger front” in the Dolby Atmos configuration, where there are speakers to either side (and above, of course) and not only behind the listener, as in a 5.1 setup.
“On a thing I was just working on, a very organic blues-rock performance, the guy is playing a Hammond B3 [organ],” Harris says, offering an example of widening the image. “It’s in its place in the band, but I moved him over to another set of faders. When he did his solo, I didn’t even turn him up, I just brought in these widened parallel channels. It didn’t get louder; it gave it more space and gave it such excitement.”
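The “widened parallel channels” Harris describes bringing up under the solo are one way of doing what engineers often achieve with mid/side processing: boosting the side (difference) signal spreads the source wider without making it louder in the center. This is a minimal sketch of that general technique in plain Python, not a description of Harris’s actual console routing; the `width` parameter and function name are illustrative assumptions:

```python
def widen(left, right, width=1.5):
    """Mid/side stereo widening.

    Decomposes a stereo pair into mid (sum) and side (difference)
    components, scales the side component by `width`, and re-encodes
    to left/right. width > 1.0 widens the image; width < 1.0 narrows it.
    """
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0          # common (center) content
        side = (l - r) / 2.0 * width  # stereo difference, scaled
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

Note that content identical in both channels (the mid) passes through untouched, which matches the effect Harris describes: the part doesn’t get louder, it just occupies more space.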
When 5.1 was first introduced, musicians, especially vocalists, were understandably concerned that people could unplug all but the center speaker and listen to only the vocal. As a result, music mixers working in surround and immersive formats generally favor a phantom center channel. “The center channel does not play a big part in my life,” Harris says. “The middle of the image is probably based on left and right front and left and right side. But very few things are in the middle.”
Where To Mix
What, if anything, does he send to the center channel speaker? “The crowd is in the middle. If I’ve got some sort of singalong where everybody knows the words, I’ll take the front row of microphones — two or four on the deck [stage] pointing out — and I’ll bleed those to the middle. The reverb will definitely be in the middle. Almost all of my ‘verbs have a mono in, seven out, so I let them do that, because they’re so good at it. But no one wants themselves solo’d anywhere, in a rock band, at least. So it can be an effect, it can be an experiential channel, but it can’t be a feature.”
As one of his “experience” speakers, the center channel can add further excitement to that organist’s performance, Harris says. “When he hits the pedals and it goes down low, everything from 150 Hz goes in the middle and to the sub. As he moves around the organ it’s really exciting. So I use the center as more of an experience than anything, because I don’t need the center to do very much work. A guitar solo is never going to work in there. Certainly, the lead vocal’s not going to work in there. And the kick drum is not going to broadcast well there.”
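Routing “everything from 150 Hz” down to the center and sub is, in effect, a crossover: the signal is split into a low band and a complementary high band. The sketch below shows the idea with a simple one-pole filter pair in Python; the 150 Hz corner and 48 kHz sample rate are stand-ins for illustration, and real broadcast chains would use steeper bass-management filters:

```python
import math

def split_low_high(samples, fs=48000.0, fc=150.0):
    """Split a mono signal into complementary low and high bands.

    Uses a one-pole low-pass at fc; the high band is the residual
    (input minus low band), so low + high reconstructs the input exactly.
    """
    a = math.exp(-2.0 * math.pi * fc / fs)  # one-pole filter coefficient
    low, high = [], []
    state = 0.0
    for x in samples:
        state = (1.0 - a) * x + a * state  # low-pass filter state update
        low.append(state)                  # content below ~fc
        high.append(x - state)             # everything above ~fc
    return low, high
```

Because the high band is derived as the residual, the two feeds sum back to the original, so sending the low band to center/sub and keeping the rest in the main image loses nothing.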
There’s another reason not to rely too heavily on the center channel, Harris adds. “I just got a new Sony super-duper TV. It wants you to let it be the center channel.” The technology, which Sony calls Acoustic Surface, generates sound directly from the center of the screen and can replace the center channel speaker, which would typically be positioned above or below the display. “I haven’t tried it yet, but people are going to, and your music is going to get weird,” he says.
“You need to be very predictable with the broadcast at all times. When I started doing this you had to be really careful with 5.1; there was no standardization,” he says. Indeed, for a long time, as broadcasters began to switch to …