Next Generation Television brings with it audio options capable of delivering features that are at once superior and highly flexible.
Monitoring is already recognised as a crucial issue in the implementation and operation of Next Gen Television systems. While the vision stream poses several major challenges for checking and quality control (QC) technology, sound could prove even trickier due to the many facets of Next Generation Audio (NGA), which forms a major part of ATSC 3.0.
Immersive sound is the feature of NGA that most people think of first, but the format also covers alternative languages and commentaries, plus a high degree of personalisation and interactivity. All of this needs to be monitored, which is already a challenge because emerging systems are object-based rather than following the purely channel-based approach of stereo and 5.1.
These issues will be discussed at the 2019 NAB Show during the Broadcast Engineering and Information Technology (BEIT) conference session An Objective Guide to Audio Monitoring for Next Gen TV. The speakers, John Schur, president of the TV Solutions Group at the Telos Alliance, and Jim Starzynski, director and principal audio engineer at NBCUniversal, will present a brief overview of the latest developments before going on to examine specifics such as the space and acoustic limitations that can affect monitoring in OB trucks, channel restrictions on mixing consoles and different consumer platforms, including listening on headphones.
The ATSC 3.0 standard includes two audio codecs: MPEG-H 3D Audio and Dolby AC-4. Both are capable of delivering immersive sound plus a range of personalisation and interactive features. South Korea is the first country to have a broadcast system based on Ultra HD TV, conforming to ATSC 3.0 with MPEG-H 3D Audio. John Schur comments that trials of these technologies are also taking place in the US, where consumers and broadcasters alike are recognising their potential. "Some of the features of NGA are ones consumers are interested in," he says. "So there is motivation for broadcasters to adopt the technologies."
Interactivity is a feature of particular interest for a great many people, especially when it offers greater accessibility and practical benefits. "A specific use case is allowing hearing impaired or aging listeners and viewers to dial up the dialogue on a programme and bring down the effects," Schur says. "There is also the ability to select different commentaries for a sporting event. Broadcasters are able to monitor different playback devices and listening environments, such as headphone virtualisation."
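The dialogue-enhancement use case Schur describes comes down to applying listener-chosen gains to separate audio objects before the final mix. A minimal sketch of the idea in Python (the object names, gain values and simple linear mixing here are illustrative assumptions, not how an MPEG-H or AC-4 renderer actually works):

```python
import numpy as np

def mix_objects(objects, gains_db):
    """Mix named audio objects with per-object gains (in dB).

    objects: dict mapping object name -> mono sample array
    gains_db: dict mapping object name -> gain in decibels
    """
    mix = None
    for name, samples in objects.items():
        gain = 10 ** (gains_db.get(name, 0.0) / 20.0)  # dB -> linear
        contribution = gain * np.asarray(samples, dtype=float)
        mix = contribution if mix is None else mix + contribution
    return mix

# Two hypothetical objects in a sports programme: dialogue and crowd effects.
t = np.linspace(0, 1, 48000, endpoint=False)
objects = {
    "dialogue": 0.5 * np.sin(2 * np.pi * 220 * t),
    "effects": 0.5 * np.sin(2 * np.pi * 80 * t),
}

# "Dial up the dialogue and bring down the effects":
accessible_mix = mix_objects(objects, {"dialogue": +6.0, "effects": -12.0})
default_mix = mix_objects(objects, {})
```

A viewer selecting an accessibility preset is, in effect, choosing a gains table like the one above; the broadcaster's monitoring problem is that every such combination is a distinct output to check.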
Much of this is facilitated by the use of object-based audio and metadata, which are both key parts of the new breed of codecs and standards for Next Gen TV and NGA. In addition to checking the MPEG-H and AC-4 streams, Schur says operators also have to accommodate established audio systems. "We need to work out how to deal with monitoring and QC for traditional stereo and 5.1, which have been the primary broadcast formats for some time," he explains.
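The pairing of objects and metadata can be illustrated with a toy model. Everything here (the field names, the example programme, the QC rule) is invented for illustration; real MPEG-H and AC-4 metadata schemas are far richer:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AudioObject:
    """Toy model of one object in an object-based programme."""
    label: str                  # e.g. "dialogue-en", "crowd-fx"
    kind: str                   # "dialogue", "commentary", "effects", "music"
    language: Optional[str]     # language tag for speech objects, else None
    user_adjustable: bool = False                    # may the viewer change its level?
    gain_range_db: Tuple[float, float] = (0.0, 0.0)  # allowed adjustment range

programme = [
    AudioObject("dialogue-en", "dialogue", "en", True, (-6.0, 12.0)),
    AudioObject("commentary-es", "commentary", "es", True, (-60.0, 0.0)),
    AudioObject("crowd-fx", "effects", None, True, (-12.0, 3.0)),
]

# Metadata lets a QC pass reason about the programme without decoding
# any audio, e.g. flag speech objects that lack a language tag:
untagged = [o.label for o in programme
            if o.kind in ("dialogue", "commentary") and not o.language]
```

The point of the sketch is that much NGA QC is metadata QC: a missing or wrong descriptor breaks personalisation even when the audio itself is clean.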
Schur observes that broadcasters have been limited in how they put together monitoring systems and tools for QC, with a long-standing reliance on technicians using their ears to check outputs for specific faults in conjunction with technology. "But now there are alternative languages, audio description and a need for tools to test for loudness compliance, silence detection and clicks and pops," he says. "NGA adds several new dimensions for monitoring and QC; just listening is not going to be enough any more. There are too many different ways the consumer can experience audio and it won't be possible [for technicians] to listen to all of those."
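To give a flavour of what such tools do, the silence-detection check Schur mentions can be sketched in a few lines. The 100 ms window, -60 dBFS threshold and two-second minimum duration are arbitrary choices for illustration; production QC also implements loudness metering to ITU-R BS.1770 / EBU R 128, which is considerably more involved:

```python
import numpy as np

def find_silence(samples, rate, threshold_db=-60.0, min_duration=2.0):
    """Flag runs of near-silence longer than min_duration seconds.

    Returns a list of (start_s, end_s) tuples. RMS level is measured
    over 100 ms windows against a dBFS threshold.
    """
    win = int(rate * 0.1)
    n = len(samples) // win
    windows = np.asarray(samples[:n * win], dtype=float).reshape(n, win)
    rms = np.sqrt(np.mean(windows ** 2, axis=1))
    quiet = 20 * np.log10(np.maximum(rms, 1e-12)) < threshold_db

    runs, start = [], None
    for i, q in enumerate(np.append(quiet, False)):  # sentinel closes any run
        if q and start is None:
            start = i
        elif not q and start is not None:
            if (i - start) * 0.1 >= min_duration:
                runs.append((start * 0.1, i * 0.1))
            start = None
    return runs
```

Run over a decoded programme output, this yields a machine-checkable list of dropouts, the kind of verdict that previously relied on an operator's ears.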
This, Schur says, is creating a need for "new and smarter" monitoring and QC tools to compensate for the reality that a human operator is now "not able to cover all the bases NGA has to offer". The different aspects involved in broadcasting true object-based audio, he continues, can take people by surprise and have not always been thought through. Because of this, techniques are being "borrowed" from other technology areas that already demand a high degree of analysis due to their make-up.
"What we're talking about is AI, rule-based QC for any area that has a large amount of customisation and user interactivity, such as gaming," Schur explains. Modern video games involve multiple story paths and a variety of resolutions, all of which are available for selection as play continues. Broadcasters and programme producers are now considering this approach for non-linear TV platforms, as recently demonstrated by Netflix's Black Mirror: Bandersnatch interactive episode. Schur says broadcasting can learn from the monitoring and checking methodologies used to create a video game and apply them to TV production and distribution.
Not that Schur thinks a software-only approach will fully supplant trained human operators carrying out QC processes: "There will probably be a combination of an operator and operator-assist tools. It's still very early days for NGA but ATSC streams are due to come on air later this year. Consumer NGA devices will be available next year, though with a limited set of functionality. We do have to start thinking about monitoring because there does seem to be some momentum building and we'll probably start seeing more NGA features within three to five years."
An Objective Guide to Audio Monitoring for Next Gen TV takes place during the 2019 NAB Show on Tuesday 9 April from 2.30 to 2.50pm in Room N260.