Practical High Dynamic Range (HDR) Broadcast Workflows - Part 1

HDR is taking the broadcasting world by storm. The combination of a greater dynamic range and wider color gamut is delivering images that truly bring the immersive experience to home viewers. Vibrant colors and detailed specular highlights build a kind of realism into broadcast productions that our predecessors could only ever have dreamed of.



This article was first published as part of the Essential Guide: Practical High Dynamic Range Broadcast Workflows.

To completely understand how we can leverage the benefits of HDR we must look deep into the HVS (Human Visual System) to gain insight into exactly what we’re trying to achieve. At first this may seem obvious, as we want to improve the immersive experience, but television, like all things engineering, is a compromise. Consequently, understanding the trade-offs between what we can achieve and what is required is critical to delivering the immersive experience.

The HVS is a complex interaction of the physical sensors in the eye, the visual cortex, and the psychology of how we perceive moving images. The HVS responds differently to still and moving pictures, and to static and dynamic range.

Then there is color to consider. Although a greater dynamic range can be achieved in the luma domain, to deliver the optimal viewing experience we’ve also expanded the color gamut to greatly extend the greens as well as improve the reds and blues. If done correctly, the pictures will look outstanding.

Making HDR work in production workflows, whether live or pre-recorded, requires us to reconsider our established and proven working practices. Although program makers will want to move as quickly as possible to delivering HDR productions, there is a huge number of televisions already in people’s homes that are not HDR compatible. This means backwards compatibility must be maintained, with all the challenges that come with it.

Two dominant HDR systems are evolving: HLG (Hybrid Log Gamma) and PQ (Perceptual Quantizer). Both have their advantages, and both work well in broadcast workflows. However, each has its own idiosyncrasies and tends to lend itself to a particular method of working. Again, understanding the HVS helps us decide which system is better for our particular use-case.
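
To make the distinction more concrete, PQ (standardized in SMPTE ST 2084 and ITU-R BT.2100) defines an absolute mapping between displayed luminance in cd/m2 and signal level, whereas HLG is a relative, scene-referred curve. The short Python sketch below implements the PQ inverse EOTF from its published constants; the luminance values at the end are illustrative only.

    # PQ constants from SMPTE ST 2084 / ITU-R BT.2100.
    M1 = 2610 / 16384        # 0.1593017578125
    M2 = 2523 / 4096 * 128   # 78.84375
    C1 = 3424 / 4096         # 0.8359375
    C2 = 2413 / 4096 * 32    # 18.8515625
    C3 = 2392 / 4096 * 32    # 18.6875

    def pq_inverse_eotf(luminance_cd_m2):
        """Map absolute luminance (0 to 10,000 cd/m2) to a normalized PQ signal (0 to 1)."""
        y = max(luminance_cd_m2, 0.0) / 10000.0
        return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

    # Illustrative values: SDR reference white, a bright highlight, and the PQ ceiling.
    for nits in (100, 1000, 10000):
        print(f"{nits:>6} cd/m2 -> PQ signal {pq_inverse_eotf(nits):.3f}")

Running this places 100 cd/m2 at roughly half of the PQ signal range, which is why PQ is often described as reserving its upper code values for highlights beyond SDR levels.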

“Scene referred” and “display referred” are terms and concepts that have existed in television and film since their inception, but it is only recently that broadcasters have had to consider the differences between them and their true impact on broadcast workflows. This has led to the concept of metadata, and a whole new vocabulary has crept into the broadcast community to help optimize images for different types of television.

HDR has not only forced us to rethink our approach to workflows but also how we monitor signals. Peak white isn’t as obvious as it was in the days of standard dynamic range and gamut detection is more important now than ever, especially as we’ve moved to a much wider color space.

These articles take us on a journey of understanding to discover what exactly we are providing with HDR and why. The practical aspects of the HVS are considered along with the requirements of the broadcast HDR workflows.


HDR is gaining incredible momentum in broadcasting, but the revolution isn’t just about higher dynamic range in the luma; it also embraces a much wider chroma space to deliver outstandingly vibrant colors, more presence, and a deeper immersive viewing experience. Although the pictures may look outstanding, creating them requires a deeper understanding of the underlying technology and of the systems used to monitor it.

To create a more immersive viewing experience, broadcast innovators have been attempting to replicate nature as closely as possible and bring the outside scene into our homes. This means a greater difference between the highlights and the detail in the shadows, with smoother blacks, which together produce the dynamic range of the image. Although we are far from truly replicating nature, increasing the luminance range with HDR and expanding the color gamut to DCI-P3 and BT.2020 delivers the optimal viewing experience.
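
As a rough, purely geometric illustration of that gamut expansion, the Python sketch below compares the areas of the BT.709, DCI-P3 and BT.2020 triangles on the CIE 1931 xy chromaticity diagram using their published primaries. It says nothing about perceptual uniformity, but it does show how much larger the newer gamuts are in simple area terms.

    # CIE 1931 xy chromaticities of the red, green and blue primaries.
    PRIMARIES = {
        "BT.709":  [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
        "DCI-P3":  [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
        "BT.2020": [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
    }

    def triangle_area(points):
        """Area of a gamut triangle on the xy diagram (shoelace formula)."""
        (x1, y1), (x2, y2), (x3, y3) = points
        return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

    reference = triangle_area(PRIMARIES["BT.709"])
    for name, points in PRIMARIES.items():
        area = triangle_area(points)
        print(f"{name:>8}: area {area:.4f} ({area / reference:.2f}x BT.709)")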

Nits And Candelas

Traditional standard dynamic range (SDR) uses fixed signal voltage levels to define peak white and black, but as we move to HDR we tend to think more in terms of light levels. The nit is a non-SI unit that has been adopted by some in the television and broadcast community to refer to the SI measurement of luminance: one nit equals one candela per square meter (1 nit = 1 cd/m2).

Cathode Ray Tube (CRT) televisions typically had a brightness of 100 nits (100 cd/m2), OLED has a maximum of around 600 – 700 cd/m2, modern LCD and QLED screens easily reach 1,000 cd/m2 to 1,500 cd/m2, and Sony’s new 8K monitor has been reported to reach 10,000 cd/m2 (although it is not yet commercially available). However, great care must be taken in interpreting these specifications, as vendors do not always specify whether the maximum brightness value refers to the whole screen or just parts of it.

This is particularly interesting when we look at what HDR is supposed to do as opposed to what we can make it do. Technically, it would be possible to display alternating black and white stripes of 0 cd/m2 and 1,000 cd/m2 on a monitor. However, the intense brightness of the 1,000 cd/m2 stripes, and the contrast they provide against the 0 cd/m2 stripes, would likely cause discomfort for the viewer.

Instead, it is the specular and transient highlights that are displayed at the 1,000 cd/m2 (and beyond) levels. It is perfectly possible to provide a display that can light large parts of the panel at these high levels, but this may create discomfort for the viewer and would require potentially huge power supplies, making significant demands on a homeowner’s electricity supply.

Human Visual System (HVS) Requirements

The concept of dynamic range has two components: the ability of a system to replicate a wide difference between the highlights and the lowlights, and the effects on the human visual system. A home SDR television set can display approximately 6 f-stops and a professional SDR display about 10 f-stops. Each increase in f-stop is a doubling of brightness, so a display with 6 f-stops gives a range of 64:1 and a display with 10 f-stops gives a range of 1024:1.
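
Since each f-stop doubles the brightness, converting a stop count into a contrast ratio is just a power of two. A minimal Python check of the figures above:

    def contrast_ratio(stops):
        """Contrast ratio delivered by a given number of f-stops (each stop doubles brightness)."""
        return 2 ** stops

    # Figures quoted above: home SDR around 6 stops, professional SDR around 10 stops.
    for label, stops in (("Home SDR", 6), ("Professional SDR", 10)):
        print(f"{label}: {stops} stops -> {contrast_ratio(stops):,.0f}:1")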

Figure 1 – Human luminance detection is formed by the scotopic (rods), mesopic, and photopic (cones) responses of the eye. The mesopic region uses a combined rod and cone response to give a crossover between them.


Research has demonstrated the human visual system can adapt to a range of 10⁻⁶ to 10² cd/m2 for the scotopic light levels, where the rods dominate, and 10⁻² to 10⁵ cd/m2 for the photopic light levels, where the cones dominate. The mesopic area covers the overlap between the scotopic and photopic light levels from 10⁻² to 10 cd/m2. This gives a complete range of 10⁻⁶ to 10⁵ cd/m2, a ratio of 100,000,000,000:1, or approximately 36 f-stops.
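
Working in the other direction, the number of f-stops spanned by a luminance range is the base-2 logarithm of the ratio between its brightest and darkest levels. A small Python sketch applied to the figures quoted above:

    import math

    def stops(low_cd_m2, high_cd_m2):
        """Number of f-stops spanned by a luminance range (log2 of the high/low ratio)."""
        return math.log2(high_cd_m2 / low_cd_m2)

    # Ranges quoted above for the human visual system.
    for name, (low, high) in {
        "Scotopic (rods)":  (1e-6, 1e2),
        "Photopic (cones)": (1e-2, 1e5),
        "Combined":         (1e-6, 1e5),
    }.items():
        print(f"{name}: {stops(low, high):.1f} stops")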

The HVS is only able to operate over a fraction of this range at any one time, due to the various mechanical, photochemical and neural adaptive processes that shift its operating range to give the appropriate light sensitivity, allowing the HVS to perform well under almost any lighting conditions. This reduced range is referred to as the steady-state dynamic range.

One reason for this reduction in dynamic range is that, even with more than 30 stops of adaptive range, the brightest objects in nature have a much higher luminance than the top of our range. The sun, for example, has a luminance of approximately 10⁹ cd/m2. An example of how the HVS automatically adapts can be seen when we look out of a bright window and then into a dark room: the HVS quickly and automatically adjusts between the two scenes to give the perception of a much higher dynamic range.

Our steady-state dynamic range may only be 11 stops, but this automatic adjustment gives the perception of a range of 14 stops, or even 20 stops in the right lighting conditions.
