UHD and HDR—the Dolby View

Better pixels are coming, but there is much detail still to be resolved. Will consumers be confused? Will a “4K” receiver also support high dynamic range (HDR), high frame rate (HFR) and wide colour gamut (WCG)? And where will the content come from? Over-the-air looks a little way off, but fibre broadband offers an immediate delivery route. Several organisations are trying to cut through all the options and offer a simple proposition to the consumer. CES 2016 saw several announcements to this end.

One was from the Consumer Technology Association (CTA), formerly the Consumer Electronics Association (CEA), and the producer of the International CES. Back in August 2015 the association released a definition for HDR compatible displays, which included a media profile ‘HDR10’. This is defined by:

  • Electro-Optical Transfer Function (EOTF): SMPTE ST 2084 
  • Colour Sub-sampling: 4:2:0 (for compressed video sources) 
  • Bit Depth: 10 bit 
  • Colour Primaries: ITU-R BT.2020 
  • Metadata: SMPTE ST 2086, MaxFALL, MaxCLL 
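
The static metadata listed above can be pictured as a simple record. The sketch below is illustrative only (the field names are my own, not taken from the standards): SMPTE ST 2086 describes the mastering display’s colour volume, while MaxFALL and MaxCLL describe the content’s light levels.

```python
# Illustrative sketch of HDR10 static metadata. Field names are my own;
# the standards define the semantics, not this layout.
from dataclasses import dataclass

@dataclass
class HDR10StaticMetadata:
    # SMPTE ST 2086: chromaticity (x, y) of the mastering display's
    # red, green and blue primaries, plus its white point
    display_primaries: tuple       # ((rx, ry), (gx, gy), (bx, by))
    white_point: tuple             # (wx, wy)
    max_display_luminance: float   # peak luminance of mastering display, nits
    min_display_luminance: float   # black level of mastering display, nits
    # Content light levels (CTA-861.3)
    max_fall: float                # Maximum Frame-Average Light Level, nits
    max_cll: float                 # Maximum Content Light Level, nits

# Example: content mastered on a 1,000-nit display with P3 primaries
meta = HDR10StaticMetadata(
    display_primaries=((0.680, 0.320), (0.265, 0.690), (0.150, 0.060)),
    white_point=(0.3127, 0.3290),
    max_display_luminance=1000.0,
    min_display_luminance=0.0001,
    max_fall=400.0,
    max_cll=1000.0,
)
```

Because this metadata is static, it describes the whole programme at once; the per-scene refinement discussed later in the interview is what Dolby Vision adds on top.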

At CES 2016 a further announcement from the UHD Alliance of the ‘Ultra HD Premium’ specification cemented the definition of a consumer standard. A receiver which carries the ULTRA HD PREMIUM logo must meet or exceed the following specifications:

  • Image Resolution: 3840×2160
  • Colour Bit Depth: 10-bit signal, with WCG using BT.2020 colour representation
  • Display Reproduction: More than 90% of P3 colours
  • High Dynamic Range and SMPTE ST2084 EOTF

The receiver should also have a combination of peak brightness and black level meeting either:

  • More than 1000 nits peak brightness and less than 0.05 nits black level, or
  • More than 540 nits peak brightness and less than 0.0005 nits black level

This choice allows for LCD or OLED displays.
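
As a quick sketch, the two-tier luminance requirement can be expressed as a simple predicate (my own formulation, not part of the UHD Alliance test procedure):

```python
# Sketch of the Ultra HD Premium brightness/black-level choice: a display
# qualifies via either the LCD-style tier (high peak brightness, modest
# black level) or the OLED-style tier (lower peak, very deep black).
def meets_uhd_premium_luminance(peak_nits: float, black_nits: float) -> bool:
    lcd_tier = peak_nits > 1000 and black_nits < 0.05
    oled_tier = peak_nits > 540 and black_nits < 0.0005
    return lcd_tier or oled_tier

# A typical OLED passes via the second tier despite its lower peak:
# meets_uhd_premium_luminance(600, 0.0001)
```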

SMPTE standard ST 2084 defines an HDR EOTF for mastering reference displays.

CES saw the release of many HDR receivers, some with Dolby Vision and some carrying the Ultra HD Premium logo. Much of the impetus for HDR has come from Dolby, which has been developing HDR imaging since its 2007 acquisition of BrightSide Technologies.

To cut through all these standards, The Broadcast Bridge asked the experts at Dolby. Patrick Griffis, vice president of technology at Dolby Laboratories, responded.

What is the relationship between Dolby Vision, SMPTE ST 2084 and CEA HDR10?

Griffis: To understand all of the proprietary and open technologies concerning HDR, we must first understand SMPTE ST 2084 (often referred to as PQ, or Perceptual Quantisation). It is the new electro-optical transfer function standard based on the human visual model; both Dolby Vision and HDR10 rely on the ST 2084 (PQ) standard to map digital code values to output light according to the human visual system model. However, to understand the difference between the technologies we must look at how the standard was designed, and why:

  1. Its objective is to set a broad practical limit, based on consumer preferences, for colour volume (colour volume is a term for all available colours at all allowable intensities) for entertainment purposes, similar to what was done for perceivable colours in CIE 1931. This practical limit was agreed by all the Hollywood studios/MovieLabs to be 10,000 nits at D6500 white. The number provides headroom for future developments and increases the intensity of ALL colours (not just white) in whatever palette of primaries is chosen (e.g. P3, Rec 2020, XYZ, etc.).
  2. It was to define the level of precision needed to avoid any perceivable artifacts. This is where Barten’s work, which is the internationally recognised reference for the contrast sensitivity function of the human visual system, comes into play.
  3. Define a simple, closed form equation which models this response for the entertainment content use case. The resulting curve goes from absolute zero black up to 10,000 nits and is standardized as SMPTE ST-2084.
  4. To avoid any quantisation errors (i.e. banding, etc.) at any particular luminance level, the ideal goal is a code value step size below the Barten threshold of visibility, though of course the curve can be quantised at any precision desired. In studies performed at Dolby with studio participation, we found that avoiding any possibility of visible quantisation errors for noise-free content (e.g. CGI) required 12-bit precision. For natural content (e.g. camera acquired), 10-bit precision was acceptable because camera noise dithers the least significant bit. As the bit depth is reduced, the step size across the 0 to 10,000 nit range grows correspondingly larger, and at some point a quantisation artifact could occur due to insufficient precision.
  5. At 12-bit precision, each step in code value (whether R, G or B individually, or all three together) is below the threshold of visibility at ANY place on the PQ curve for any content type. Thus content which is mastered in PQ at 12 bits but only goes to 1,000 nits peak white fits nicely inside the 10,000-nit container. Since each step is already below the threshold of visibility, there is no need to use “all” the code values (as was common practice with analogue and 8-bit digital video, where the SNR was generally insufficient and not perceptually limited), and the content is thus future-proof. Note that because PQ is modelled after the human visual system, which has an absolute sensitivity function, PQ does too. This is a fundamental advantage of PQ going forward, because the human visual system is the real design target for next-generation entertainment imagery.
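
The closed-form curve Griffis describes can be sketched directly from the constants published in ST 2084. The implementation below is a minimal illustration (not Dolby’s code), mapping a normalised code value to absolute luminance and back:

```python
# Sketch of the SMPTE ST 2084 (PQ) EOTF and its inverse, using the
# constants given in the standard. A normalised code value in [0, 1]
# maps to absolute luminance from 0 up to 10,000 nits.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(n: float) -> float:
    """Decode a normalised PQ code value to luminance in nits."""
    p = n ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse_eotf(nits: float) -> float:
    """Encode absolute luminance (nits) to a normalised PQ code value."""
    y = (nits / 10000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# A 1,000-nit peak white sits around three-quarters of the way up the
# code range, leaving headroom inside the 10,000-nit container.
code_1000 = pq_inverse_eotf(1000.0)
```

This illustrates the “container” point above: mastering to 1,000 nits simply uses the lower portion of the curve, and since every step is already below the visibility threshold, the unused upper codes cost nothing.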

As I previously mentioned, both Dolby Vision and HDR10 rely on the ST 2084 (PQ) standard to map digital code values to output light according to the human visual system model. However, Dolby Vision is a superset of HDR10: it has all the features of HDR10 but provides full 12-bit fidelity, whereas HDR10 is only 10 bits. In addition, Dolby Vision provides scene-based dynamic metadata to help improve the quality of content displayed on a consumer receiver on a scene-by-scene basis. HDR10 does not have this capability.

Further to this, the Dolby Vision VS10 playback solution can receive any HDR format that is based on SMPTE ST 2084 and ST 2086, ensuring the best HDR experience is delivered with Dolby Vision while still allowing for TV OEM customisation. Devices based on Dolby Vision VS10 will play back a variety of HDR content types, such as Dolby Vision single-layer and dual-layer streams, HDR10 from UHD Blu-ray, and UHDA-certified HDR content. A Dolby VS10-enabled product will play back all these content types, but an HDR10-only TV will not.

I understand that the Dolby PQ EOTF goes all the way up to 10,000 nits, yet few displays are brighter than 1,000. What happens to the peak white on such a display? Would it be soft-clipped in the driver?

Griffis: To answer this question, let us provide some context. Few consumer displays are capable of 1,000 nits today, though some will be in the near future, so a signal design limited to 1,000 nits would simply cap display innovation at that level for years to come. The idea that the format should be limited to existing display technology has been the cause of the innovation lag in displays until now. Historically, TVs have always been limited to Rec. 709 colour and around 100 to 500 nits because all the source content was limited to that as well, i.e. graded at Rec. 709 colour and 100 nits peak white. TVs made with a wider colour palette and higher dynamic range did not work well, because the pixel dynamic range “stretching” needed to take advantage of the greater display capability added cost without necessarily adding quality. Many TVs that attempted these improvements failed to get traction in the market. It was a chicken-and-egg scenario: without better content there was no need for better TVs, and without better TVs there was no incentive to make better content.

Dolby Vision was designed to break that conundrum by providing a content-mastering approach that future-proofs the content creation process and works with today’s TVs to make the best picture possible, while providing headroom for tomorrow’s TVs to work even better. By raising the bar on the content creation side, we allow innovation and technology to keep advancing on the consumer side, improving the viewing experience while preserving creative intent across a variety of devices.

With this concept in mind, an HDR reference mastering monitor is used to make the colour and brightness decisions, taking full advantage of the dynamic range of the display. Our basic philosophy is to master in the largest “colour volume” available. A “colour volume” is shorthand for the palette of available colours at all allowable intensities, noting that white is always the brightest since it is the sum of the other colours. The larger the colour volume, the better the image. Today there are professional reference displays with a peak white of up to 4,000 cd/m² (nits), but improvements are occurring rapidly, and the aspirational goal is ultimately to achieve the full 10,000-nit capability in SMPTE ST 2084 for mastered content.

Once the reference grade is done, the Dolby Vision-capable colour-grading system will analyse and save the metadata that describes the creative decisions made on the mastering display. That metadata can be used to dynamically map the HDR master to the target colour volume of the consumer playback device, which may vary in performance.

During playback, the Dolby Vision metadata is used to optimise the experience for the target display device; a Dolby Vision-enabled display knows its maximum and minimum brightness, colour gamut and other characteristics. The content is not “clipped” going into the smaller display colour volume but mapped, pixel by pixel, to make the best picture possible for the device’s capabilities. Using the metadata, a 600-nit TV will look great and a 1,200-nit TV will look even better, all referencing the same metadata and HDR source content. So as the content gets better and better, so can the TV pictures.
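
The map-rather-than-clip idea can be sketched with a simple knee curve. This is purely illustrative: the actual Dolby Vision mapping is proprietary and driven by per-scene metadata, whereas the function below just compresses highlights above an assumed knee point into the target display’s range.

```python
# Hedged sketch of luminance mapping from a mastering display's range into
# a smaller target display's range. NOT the Dolby Vision algorithm; a toy
# knee curve that preserves shadows/midtones and rolls off highlights
# instead of hard-clipping them.
def map_luminance(nits: float, master_peak: float, target_peak: float) -> float:
    if master_peak <= target_peak:
        return min(nits, target_peak)      # nothing to compress
    knee = 0.75 * target_peak              # assumed roll-off start point
    if nits <= knee:
        return nits                        # below the knee: pass through
    # Smoothly compress [knee, master_peak] into [knee, target_peak]
    t = (nits - knee) / (master_peak - knee)
    return knee + (target_peak - knee) * (t * (2 - t))  # ease-out curve

# A 4,000-nit master shown on a 600-nit TV: detail above ~450 nits is
# compressed rather than lost, and peak white lands exactly at 600 nits.
```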
