HDR Primer: Part 2 - Advanced HDR Details

HDR is a technology that is evolving quickly on both the professional and consumer sides. Like all new technologies, the devil is in the details, and there is confusion about which HDR technique and implementation is best for a given situation.

Today’s camera reference for content creation is ITU-R BT.709. Remember, the original TV display was a CRT picture tube with unique, non-linear characteristics, and CRT behavior was standardized in the 1930s. We compensated for the CRT by giving cameras an inverse gamma curve, so the display’s gamma curve established the camera gamma curve, not the other way around. The goal of BT.1886 was to create a display standard that effectively emulated what the CRT did, and the BT.709 camera gamma curve is the inverse of that BT.1886 display curve. BT.709 still uses the picture brightness established in 1932, more than 85 years ago, yet modern LCD and OLED displays are capable of reproducing images at a higher luminance, contrast and color range than the current BT.709 standard allows.
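To make that camera/display relationship concrete, here is a minimal Python sketch of the BT.709 camera curve and a simplified BT.1886 display curve. It assumes a 100-nit peak and a zero black level (so the BT.1886 black-level lift term drops out); the constants are the ones published in the standards.

```python
def bt709_oetf(scene_light: float) -> float:
    """ITU-R BT.709 camera opto-electronic transfer function.
    Maps normalized scene light (0..1) to a non-linear signal value (0..1)."""
    if scene_light < 0.018:
        return 4.500 * scene_light
    return 1.099 * scene_light ** 0.45 - 0.099


def bt1886_eotf(signal: float, peak_nits: float = 100.0) -> float:
    """ITU-R BT.1886 display electro-optical transfer function,
    simplified by assuming a black level of 0 cd/m2 (no lift term)."""
    return peak_nits * signal ** 2.4


# Example: 18% grey scene light through the camera curve and onto the display
signal = bt709_oetf(0.18)
print(f"signal = {signal:.3f}, displayed luminance = {bt1886_eotf(signal):.1f} nits")
```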

To discuss HDR, we need to understand some important new terminology. The analog camera gamma curve has been replaced by digitally defined curves called Transfer Functions, and there are three of them. The Opto-Electronic (camera) Transfer Function (OETF) converts scene light into the digital signal. The Electro-Optical (display) Transfer Function (EOTF) converts that signal back into light on a monitor or TV. The Opto-Optical Transfer Function (OOTF) is the complete, adjusted “Glass to Glass” transfer function of the whole system, from lens to screen.
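As a rough illustration of how the three terms relate, the sketch below uses simple power-law stand-ins rather than the real standardized curves; the only point it makes is that the OOTF is the camera OETF followed by the display EOTF.

```python
def oetf(scene_light: float) -> float:
    """Camera curve (an illustrative power-law stand-in for a real OETF)."""
    return scene_light ** (1 / 2.2)


def eotf(signal: float, peak_nits: float = 100.0) -> float:
    """Display curve (an illustrative power-law stand-in for a real EOTF)."""
    return peak_nits * signal ** 2.4


def ootf(scene_light: float) -> float:
    """End-to-end 'glass to glass' opto-optical transfer function: EOTF(OETF(light))."""
    return eotf(oetf(scene_light))


# The camera and display curves deliberately do not cancel exactly,
# so this toy system has an overall gamma of about 2.4 / 2.2 = 1.09.
print(f"{ootf(0.18):.1f} nits displayed for 18% scene light")
```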

Another HDR term to consider is the measurement of a display’s luminance, or brightness. The unit is one candela per square meter (cd/m²), which we call a “nit”. We have limited today’s typical picture brightness to about 100 nits at 100 IRE, because 100 nits was the maximum brightness a CRT could deliver without distortion. HDR removes that limitation on brightness. SMPTE ST 2084, PQ and consumer HDR10 are all different names for the same transfer curve. On an HDR TV, pictures can be 10 times brighter at 1,000 nits compared with the 100 nits we use today, and Dolby Vision takes that brightness to a theoretical limit of 10,000 nits. That extreme brightness is not possible today, although some reference monitors are capable of up to 4,000 nits peak brightness, and there are growing concerns about “visual loudness” at these extreme levels.
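For reference, ST 2084 defines PQ as a closed-form equation with published constants. Here is a minimal Python sketch of the PQ EOTF that maps a normalized code value to absolute luminance in nits; note that full scale really does mean 10,000 nits.

```python
# SMPTE ST 2084 (PQ) constants, as published in the standard
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875


def pq_eotf(signal: float) -> float:
    """Map a normalized PQ code value (0..1) to absolute luminance in nits (cd/m2)."""
    e = signal ** (1 / M2)
    return 10000.0 * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)


# Full scale is 10,000 nits; a code value of about 0.508 is roughly today's 100-nit SDR peak
print(f"{pq_eotf(1.0):.0f} nits at full scale, {pq_eotf(0.508):.0f} nits at 50.8% signal")
```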

Figure 1. Nits brightness scale from black to a maximum theoretical brightness.

Most people don’t realize that movie theater screens are only 48-50 nits, about half the brightness of a typical TV. However, the theater environment is dark and the viewing conditions are controlled, while television viewing is mostly uncontrolled: viewers watch on mobile devices and on TVs of varying quality, with different adjustments, in high ambient lighting. Most importantly, changing programs and interstitials dramatically affect the viewing experience. In other words, cinema and TV are very different viewing environments.

With a greater dynamic range, we also need more bits to avoid visible quantizing errors and banding. A minimum of 10-bit quantization is required for HDR, and Dolby Vision uses 12 bits. Unfortunately, the 20-year-old ATSC broadcast television standard is 8-bit MPEG-2 at 19.39 Mbps, which means broadcast transmission of HDR is not possible today. ATSC 3.0 will allow 4K, HDR, HFR and WCG using H.265 (HEVC).
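To see why the extra bits matter, the sketch below quantizes a smooth ramp at different bit depths and counts how many distinct code words survive; the fewer the steps, the wider and more visible the bands become when they are stretched across an HDR display’s brightness range. The ramp and the loop are illustrative, but the level counts are the real 8/10/12-bit figures.

```python
def quantize(value: float, bits: int) -> int:
    """Quantize a normalized signal (0..1) to an integer code word of the given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels)


# A smooth 0..1 ramp of 4,096 samples, quantized at three bit depths
ramp = [i / 4095 for i in range(4096)]
for bits in (8, 10, 12):
    steps = len({quantize(v, bits) for v in ramp})
    print(f"{bits}-bit: {steps} distinct levels across the ramp")
```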

Figure 2. HDR requires 10 bits to avoid banding in the picture.

Hitachi production cameras output 10-bit SDI video, the standard for broadcast-quality cameras, and 10-bit quantizing is part of the PQ/HDR10 specification; Dolby Vision requires 12 bits at 1080p. In addition to more quantizing bits, we need to allocate those bits properly and align them carefully with the dynamic characteristics of the camera imagers. HDR is typically capable of at least 5 stops greater dynamic range than SDR, which is 32 times more dynamic range. Each manufacturer optimizes its imagers for HDR. In the Hitachi imagers, we allocate a greater number of bits to the range the human eye is most sensitive to: mid-tones and shadows. As a result, we perceive shadows and dark objects as brighter than they actually are, probably an evolutionary legacy of needing to spot a tiger hiding in the shadows.
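How a particular manufacturer allocates bits inside its imagers is proprietary, but the public PQ curve illustrates the same principle. The sketch below (which repeats the ST 2084 constants so it stands alone, and assumes full-range 10-bit coding for simplicity) shows the 32x arithmetic and how much of a 10-bit PQ signal is devoted to the 0-100 nit shadow and mid-tone range.

```python
# SMPTE ST 2084 (PQ) constants, repeated here so this sketch stands alone
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32


def pq_inverse_eotf(nits: float) -> float:
    """Map absolute luminance in nits to a normalized PQ signal value (0..1)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2


# 5 extra stops of dynamic range means 2**5 = 32 times more light handled
print(f"5 stops = {2 ** 5}x the dynamic range")

# How much of a full-range 10-bit PQ signal (0..1023) covers the 0-100 nit range?
codes_below_100_nits = round(pq_inverse_eotf(100) * 1023)
print(f"About {codes_below_100_nits} of 1023 code values sit at or below 100 nits")
```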

HDR is not just a camera curve or a display curve; it’s a complete OOTF system. Every link in the chain must be capable of passing and processing the HDR signal, in both bit depth and metadata. If any link in the chain fails, you won’t see HDR. Metadata is used to signal HDR10 and Dolby Vision processing, and if that metadata is missing there will be no HDR. HLG carries no metadata.
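For HDR10, the metadata in question is static: the SMPTE ST 2086 mastering display color volume plus the MaxCLL/MaxFALL content light levels defined in CTA-861.3. The sketch below is only an illustrative container for those values; the field names and example numbers are ours, not any particular library’s API.

```python
from dataclasses import dataclass


@dataclass
class HDR10StaticMetadata:
    """Illustrative container for the static metadata an HDR10 program carries end to end.
    Field names are descriptive, not tied to any real library or bitstream syntax."""
    # SMPTE ST 2086 mastering display color volume
    display_primaries: tuple       # ((Rx, Ry), (Gx, Gy), (Bx, By)) chromaticities
    white_point: tuple             # (x, y) chromaticity of the mastering display
    max_display_luminance: float   # nits
    min_display_luminance: float   # nits
    # Content light levels (CTA-861.3)
    max_cll: int                   # maximum content light level, nits
    max_fall: int                  # maximum frame-average light level, nits


# Example values: a P3-D65 mastering display with a 1,000-nit grade
meta = HDR10StaticMetadata(
    display_primaries=((0.680, 0.320), (0.265, 0.690), (0.150, 0.060)),
    white_point=(0.3127, 0.3290),
    max_display_luminance=1000.0,
    min_display_luminance=0.0001,
    max_cll=1000,
    max_fall=400,
)
print(f"MaxCLL {meta.max_cll} nits, MaxFALL {meta.max_fall} nits")
```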

Hybrid Log-Gamma (HLG) is standardized as ARIB STD-B67. HLG is intended for live video and is “scene-referred”: the signal describes relative scene light rather than absolute display light, and the lower part of its curve behaves like a conventional camera gamma, so the output remains watchable on traditional SDR displays. The HLG transfer characteristic is a hybrid of a conventional gamma-like curve for the lower range and a logarithmic curve for the highlights. It was developed for live television by two major international broadcasters, the BBC and NHK.
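The hybrid is visible in the OETF itself: the ARIB STD-B67 / ITU-R BT.2100 curve uses a square-root “gamma” segment for the lower range and a logarithmic segment for the highlights. A minimal sketch using the published constants:

```python
import math

# Hybrid Log-Gamma (ARIB STD-B67 / ITU-R BT.2100) OETF constants
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073


def hlg_oetf(scene_light: float) -> float:
    """Map normalized scene light (0..1) to an HLG signal value (0..1)."""
    if scene_light <= 1 / 12:
        return math.sqrt(3 * scene_light)            # gamma-like square-root segment
    return A * math.log(12 * scene_light - B) + C    # logarithmic highlight segment


# The crossover between the two segments sits at half signal level
print(f"{hlg_oetf(1 / 12):.3f} at the crossover, {hlg_oetf(1.0):.3f} at peak scene light")
```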

Dolby’s PQ curve is standardized as SMPTE ST 2084. Consumer TV manufacturers call it HDR10, and every HDR TV supports it. PQ and Dolby Vision use the same base transfer curve, but Dolby Vision requires 12 bits for a theoretical 10,000 nits of brightness. The “artistic intent” of how the director wants you to see the picture is carried in static or dynamic metadata, and that HDR metadata controls the TV display so each scene is optimized for the director’s vision of how it should look to you.

PQ and Dolby Vision are more applicable to cinema and TV movies, where color and tone grading are applied in post-production, much as they are to linear or RAW “scene-referred” camera material, because grading is far more controllable than a pre-set gamma curve. The big advantage of working with scene-referred linear or RAW material is that the scene is recorded as it actually is: grading does not have to fight the distortions a gamma curve adds, so corrections behave in a more natural and realistic manner. The downside is that much more data is needed to record the scene accurately, and viewing ungraded material on a standard-gamma TV or monitor looks very dark because TVs expect display-referred signals. It’s important to note that HDR is most visible with cinematic or dramatic lighting; TV game shows and news sets that are traditionally flat-lit will look about the same in HDR or SDR.

In the next installment, Part 3 will discuss how to implement and use HDR.
