Applied Technology: Dolby E Workflow – From Content Creation to Playout

From creation to broadcast to transmission, audio content requires similar processes, with variations depending on where the content sits in the workflow. This article briefly discusses content requirements, workflows, and the role that Dolby E plays in this transmission chain. At each stage, Dolby E based processing and other audio workflows are required.

Let us first consider an actual workflow used by three separate customers who prepare content and pass it to one another. Content creation begins at a Hollywood studio, the finished content is delivered to a broadcaster, and the files are then delivered to a playout centre. Whilst this particular example is a real-world implementation, there are many similar potential use-cases all over the world today.

Why Dolby E?

Videotape recorders and early playout servers provided support for only four mono audio channels, but users needed more. Dolby E was invented to efficiently package eight channels of audio into two. Dolby E also allows metadata describing the program and its audio characteristics to be inserted into the data stream, so that information is preserved as the content moves from production through post production to broadcast transmission. Although Dolby E was designed to work in a hardware based, real time environment, it is equally applicable, and needed, in file-based environments.

Dolby E can be used anywhere in the broadcast chain prior to the point where the audio will be delivered to consumers. Dolby E is never delivered to end users, but is instead decoded back to baseband, and then encoded into the required consumer format, such as Dolby Digital.

As file-based workflows proliferate, implementing Dolby E presents challenges: there are a vast number of hardware based solutions, but precious few for file-based operations.

Dolby E details

Dolby E can be encoded as 16-bit or 20-bit words. When 16-bit encoding is chosen it is only possible to encode four or six tracks of audio. 20-bit word length is needed if you wish to encode the maximum of eight tracks of audio.
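The rule is simple enough to capture in a few lines of Python. The sketch below is purely illustrative, not part of any Dolby SDK, and the function names are our own:

# Minimal sketch of the Dolby E word length / channel count rule.
# Illustrative only; the function names are assumptions, not a real API.

def max_dolby_e_channels(word_length_bits: int) -> int:
    """Maximum channels a Dolby E stream can carry at a given word length."""
    if word_length_bits == 16:
        return 6   # 16-bit encoding supports four- or six-track programs only
    if word_length_bits == 20:
        return 8   # 20-bit encoding is needed for the full eight tracks
    raise ValueError(f"unsupported Dolby E word length: {word_length_bits}")

def required_word_length(channel_count: int) -> int:
    """Smallest Dolby E word length able to carry the requested channels."""
    if channel_count <= 6:
        return 16
    if channel_count <= 8:
        return 20
    raise ValueError("Dolby E carries at most eight channels")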

This has an impact on the file type used to contain the Dolby E encoded data. File types such as MXF or MOV usually carry audio tracks as 16-bit or 24-bit words. Dolby E encoded at a 20-bit word length cannot be placed in files created for 16-bit audio; files with a 24-bit word length are needed to carry it. Conversely, 16-bit Dolby E can be placed inside media files containing 24-bit audio without limitation.
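That compatibility rule can also be expressed directly, again as a hedged sketch with an invented function name, assuming only the word lengths discussed above:

def dolby_e_fits_container(dolby_e_bits: int, track_bits: int) -> bool:
    """True if a Dolby E stream of the given word length fits an
    MXF/MOV audio track of the given bit depth.

    16-bit Dolby E fits 16-bit and 24-bit tracks;
    20-bit Dolby E requires a 24-bit track.
    """
    if dolby_e_bits == 16:
        return track_bits in (16, 24)
    if dolby_e_bits == 20:
        return track_bits == 24
    return False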

Dolby E uses a frame structure for the audio, and the audio frames must line up with the video frames to ensure that frame accurate editing is possible, and that switchers can change source in a real time environment without glitches in the audio. A short guard band, the gap between the video frame boundary and the start of each Dolby E frame, ensures the video and audio data alignment is acceptable.

It is important that file-based audio processing systems are able to measure the existing Dolby E guard band and correct it when needed.
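To illustrate the measurement, the sketch below scans a PCM track for the SMPTE 337M sync preamble that marks the start of each Dolby E burst (Pa = 0xF872, Pb = 0x4E1F in 16-bit framing) and reports its offset from the video frame boundary. The 48 kHz / 25 fps assumption and the simple list-of-words representation are ours:

# Measuring the Dolby E guard band in one video frame of a PCM track.
# Assumes 16-bit SMPTE 337M framing and 48 kHz audio at 25 fps video.
from typing import List, Optional

PA, PB = 0xF872, 0x4E1F           # SMPTE 337M sync words, 16-bit framing
SAMPLES_PER_FRAME = 48000 // 25   # 1920 audio samples per video frame

def guard_band_samples(track: List[int], frame_start: int) -> Optional[int]:
    """Offset (in samples) of the Dolby E burst from the video frame
    boundary at `frame_start`, or None if no burst is found in the frame."""
    end = min(frame_start + SAMPLES_PER_FRAME - 1, len(track) - 1)
    for i in range(frame_start, end):
        if track[i] == PA and track[i + 1] == PB:
            return i - frame_start
    return None

A file-based processor would repeat this for every frame and re-time any burst whose offset falls outside tolerance.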

Use-case one: the content provider

A major Hollywood content provider needed to deliver content requiring a number of audio workflows involving loudness compliance, track mapping, and Dolby E encoding. With a dozen or so different workflows to create, the facility faced a specific challenge.

The Dolby E encode had two key requirements: the first was to insert the program name inside the Dolby E container of every program delivered; the second was to automate the workflow, which required extracting the program name from the file name. Previously, this part of the operation was manual and time consuming. We wrote additional code in our product, Emotion Systems' Engine, that performs this step as part of the workflow, allowing full automation of all the workflows.
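To give an idea of what the file name parsing involves, the snippet below derives a program name from a delivery file name. The naming convention shown is an invented example, not the content provider's actual scheme:

# Illustrative only: the '_v<digits>' version-tag convention is hypothetical.
import re
from pathlib import Path

def program_name_from_file(path: str) -> str:
    """Derive a program name from a delivery file name, e.g.
    'MyShow_S01E02_v2.mxf' -> 'MyShow_S01E02' (hypothetical convention)."""
    stem = Path(path).stem                               # drop folder and extension
    match = re.match(r"(?P<name>.+?)(?:_v\d+)?$", stem)  # strip trailing version tag
    return match.group("name")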

Below is an example of the key workflow that the content provider required:

Emotion Systems' Engine Dolby Encode with Program Name.

Use-case two: the broadcaster

This broadcaster received content from multiple suppliers, resulting in a need for more than 20 different audio processing workflows. Whilst their post production department used edit suites for some of these workflows, it was a costly and time consuming affair. A particular challenge was the need to replace a stereo pair inside a file whose audio had already been encoded to Dolby E. They had to do this because they were adding a commentary (AD, audio description, also known as video description in the USA) track, which is usually done later in the processing chain. This complex process was not performed in house but outsourced to a post house.

Sending each file to a post house added cost and time. When the broadcaster evaluated Engine, they immediately saw the potential to automate, to save money, and to make the process consistently repeatable. The result is that all 20 plus workflows have been automated prior to delivery to the playout provider.
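Conceptually, the replacement step looks like the sketch below, which operates on already decoded Dolby E content; the decode and re-encode happen around it in a tool such as Engine, and the AD channel positions are an assumption for illustration:

def replace_ad_pair(channels: list, metadata: dict, new_ad: tuple) -> tuple:
    """Replace the stereo pair carrying audio description in decoded
    Dolby E content (eight PCM channels plus metadata), leaving the
    original metadata untouched so it survives the re-encode."""
    AD_LEFT, AD_RIGHT = 6, 7   # assumed slots for the AD pair (illustrative)
    updated = list(channels)   # avoid mutating the caller's channel list
    updated[AD_LEFT], updated[AD_RIGHT] = new_ad
    return updated, metadata   # metadata passes through unchanged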

Using Engine to replace the audio description within a Dolby E signal.

Emotion Systems' Engine audio description replacement workflow.

Use-case three: the playout centre

At the playout centre, MAM-based automation manages Dolby E and other loudness processes in a complete end-to-end workflow.

Even with the broadcaster's automated workflows upstream, content arrived at the playout centre in 14 different variations of audio. The playout centre required the content to be normalised to stereo / stereo / Dolby E / Dolby E. To normalise the files and make them suitable for the playout server, 14 different flavours of audio workflow were required.

Normalising files manually would be so labour intensive as to be nearly impossible. The process of normalising to the desired format for the playout centre is now under MAM control and completely automated, saving countless hours and pounds.
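A simplified model of that normalisation logic is sketched below. The variation names and mappings are invented stand-ins (the real site handled 14 of them), showing how each incoming layout maps onto the target stereo / stereo / Dolby E / Dolby E slots:

# Illustrative normalisation rules; variation names and mappings are invented.
TARGET_LAYOUT = ("stereo", "stereo", "dolby_e", "dolby_e")

# Each rule maps the four output slots to incoming track indices;
# None marks a slot that must be synthesised (e.g. a fresh Dolby E encode).
LAYOUT_RULES = {
    "stereo_only":         (0, 0, None, None),
    "stereo_plus_dolby_e": (0, 1, 2, 3),
    "dolby_e_only":        (None, None, 0, 1),
}

def normalisation_plan(variation: str) -> tuple:
    """Return the per-slot source plan for one incoming audio variation."""
    try:
        return LAYOUT_RULES[variation]
    except KeyError:
        raise ValueError(f"no normalisation rule for variation {variation!r}")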

Using Engine to normalise files for playout.

Emotion Systems' Engine Playout workflow.

Automated file-based processing for Dolby E

Keeping the Dolby E metadata intact, and enhancing it as needed along the way, is a key requirement for a transmission chain. Loudness correction and other operations that are presently implemented through manual processes in edit suites are cost prohibitive and not scalable. With the rising amount of content being generated and transmitted, new methods must be found for efficient, high quality processing.

Engine from Emotion Systems is designed to enable scalable workflow automation, with a GUI that enables the creation of a wide range of workflows. Its signal processing algorithms provide consistent and predictable results.

