HDR: Part 6 - PQ And HLG Cinematography
Like a lot of new ideas in film and TV, high dynamic range pictures are easy to like. The fear is that they’re far harder to create. In reality, HDR isn’t necessarily a huge burden, certainly not in the way that stereo 3D can be. It’s often been possible to take well-shot images which were produced with no thought of an HDR finish, and make them available in HDR via a fairly straightforward re-grading process. Most people are aware of the truism that proper exposure is the cinematographer’s first responsibility, and we should probably hold on to that thought.
Even so, there are a few complicating factors. HDR can make previously acceptable things look rough: skies, for instance, can become overpoweringly bright, especially when they appear behind dimmer foreground subjects. This sort of thing can be controlled to some extent in grading, with the usual concerns about shot-to-shot consistency and maintaining everyone’s creative intent. One of the more disconcerting things about mastering HDR material, though, is that it breaks one of the most treasured promises of film and TV work. Until HDR, we at least tried to ensure that the monitor in the grading suite would look like the picture on the viewer’s TV. In HDR, no longer.
Experienced people will already be chuckling at this: the grading suite might be calibrated to perfection, but home TVs have often been wildly inaccurate. With HDR, though, it’s known and accepted that not every display has the same capability. The various systems that exist to get HDR pictures from the camera to the home were designed to ensure that the eventual image looks reasonable on displays that may differ widely. It’s not a complex concept: less bright, less contrasty displays need to increase contrast and brightness in the image before displaying it. The details of doing that, though, of adjusting color and brightness, are subjective. That’s why we have colorists in the first place, so we need to be careful.
The bit of mathematics that describes how bright a display gets in response to a given signal level is called an electro-optical transfer function, or EOTF. In the conventional, standard dynamic range (SDR) world, this wasn’t very well codified until quite recently, when the ITU’s recommendation BT.1886 finally defined how bright an SDR display (strictly, a TFT-LCD SDR display) should be. There had been de-facto standards for years, though, and HDR doesn’t really bring any completely new concepts to the mix. SDR television has always had an EOTF; it’s just that HDR gives us some new ones that are set up to allow displays to push out enough light that the noonday sun becomes, if not realistically dazzling, at least brighter than it once was. What makes it complicated is the potential need to adjust images for known differences between the grading display and the viewer’s TV.
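To make the EOTF idea concrete, here is a minimal Python sketch of the BT.1886 curve, which is essentially a 2.4 gamma anchored to the display’s white and black levels. The 100 cd/m2 peak and zero black level are illustrative assumptions, not values mandated by the recommendation:

```python
import numpy as np

def bt1886_eotf(signal, l_white=100.0, l_black=0.0):
    """ITU-R BT.1886 EOTF: map a normalized SDR signal (0-1) to display
    luminance in cd/m2 for a display with the given white and black levels."""
    gamma = 2.4
    a = (l_white ** (1 / gamma) - l_black ** (1 / gamma)) ** gamma
    b = l_black ** (1 / gamma) / (l_white ** (1 / gamma) - l_black ** (1 / gamma))
    signal = np.asarray(signal, dtype=float)
    return a * np.maximum(signal + b, 0.0) ** gamma

print(bt1886_eotf(1.0))   # peak signal hits the display's white level: ~100 cd/m2
print(bt1886_eotf(0.5))   # mid-scale signal sits far lower: ~19 cd/m2
```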
The two concepts most commonly discussed are called HLG, for hybrid log-gamma, and PQ, for perceptual quantizer. The ideas behind them are simpler than the jargon makes them sound, although they take very different approaches in the pursuit of increased contrast. The word “contrast” is carefully chosen here, because some of the most convincing HDR displays are actually dimmer than far less capable ones but achieve very low black levels. An HDR display ideally combines low black levels with high peak brightness, though many consumer displays manage only one or the other.
PQ is arguably the most carefully-optimized way to turn an electronic signal into light. It was designed such that the steps in brightness caused by a single digital signal level increase were (just about) below the threshold of visibility at all points from minimum black to maximum white. These decisions were based on 12-bit digital signals, with a zero to 4095 code range, and displays of up to 10,000 candela per square meter (also called nits) in brightness.
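As an illustration of what that curve looks like in practice, here is a minimal Python sketch of the PQ EOTF using the constants published in SMPTE ST 2084; the mid-scale 12-bit code value at the end is just an example input:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants, as defined in the standard.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal):
    """Map a normalized PQ signal (0-1) to absolute luminance in cd/m2."""
    e = np.asarray(signal, dtype=float) ** (1 / M2)
    return 10000.0 * (np.maximum(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

# A 12-bit code value at mid-scale decodes to roughly 92 cd/m2; the top code
# value decodes to the full 10,000 cd/m2.
print(pq_eotf(2048 / 4095))
print(pq_eotf(1.0))
```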
10,000 nits is a very future-proof target in a world where even the highest-end options achieve maybe 4000, and current workflows often target lower levels as an ideal. Domestic displays are often below 1000. The crucial thing about PQ is that, because of the variability of HDR displays, PQ signals include data describing the capability of the monitor on which the material was graded, along with information about the picture content. This information is used by other displays to produce the closest reasonable approximation of what the colorist saw, depending on the capability of the display in question. PQ will most often be encountered in the wild as part of the HDR10 or HDR10+ standards, or underlying Dolby Vision.
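By way of a sketch, in HDR10 that data takes the form of static metadata: an SMPTE ST 2086 description of the mastering display, plus MaxCLL and MaxFALL content light levels. The structure below is purely illustrative (the field names and example figures are invented for this article, not a real API), but it shows the kind of information a downstream display has to work with when it tone-maps:

```python
from dataclasses import dataclass

@dataclass
class Hdr10StaticMetadata:
    """Illustrative container for the static metadata carried with an HDR10 stream."""
    mastering_peak_nits: float    # SMPTE ST 2086: peak luminance of the mastering display
    mastering_black_nits: float   # SMPTE ST 2086: minimum luminance of the mastering display
    max_cll_nits: float           # MaxCLL: brightest single pixel anywhere in the content
    max_fall_nits: float          # MaxFALL: brightest frame-average light level

# A consumer display compares these figures with its own limits to decide
# how aggressively the PQ signal needs to be tone-mapped.
metadata = Hdr10StaticMetadata(
    mastering_peak_nits=1000.0,
    mastering_black_nits=0.0005,
    max_cll_nits=800.0,
    max_fall_nits=180.0,
)
print(metadata)
```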
HLG takes a very different approach. Rather than tying every digital level to a carefully-calculated display brightness, HLG does not describe the intended monitor brightness at all. HLG describes scene brightness, give or take some modification by the colorist or a broadcast vision engineer. As such, an HLG display makes its own decision about how the scene should reasonably be rendered, depending on its capability; ideally there is no variability in the signal content based on the capability of the colorist’s display.
Figure 1 – SDR and HDR HLG are similar up to about 0.6 of the input luminance level, after which the HLG curve continues to sympathetically compress the highlights. This helps maintain backwards compatibility with SDR.
The HLG relationship between brightness and signal level is arguably simpler. Between black and the 50% signal level it takes a conventional, broadly SDR-like approach; beyond 50% it switches to a logarithmic (that is, low contrast) representation for the highlights. In this system, the changeover point at half signal is defined as reference white level, while all signals above it represent the extra-bright parts of HDR pictures.
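A minimal Python sketch of the BT.2100 HLG OETF shows that changeover directly: scene light up to one-twelfth of maximum is handled with a square-root curve and encodes to signal levels up to 0.5, while everything above it is treated logarithmically:

```python
import numpy as np

# ITU-R BT.2100 HLG OETF constants.
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * np.log(4 * A)    # 0.55991073

def hlg_oetf(scene_linear):
    """Map normalized scene-linear light (0-1) to an HLG signal (0-1)."""
    e = np.asarray(scene_linear, dtype=float)
    return np.where(e <= 1 / 12,
                    np.sqrt(3 * e),
                    A * np.log(np.maximum(12 * e - B, 1e-12)) + C)

print(hlg_oetf(1 / 12))   # the changeover point encodes to exactly 0.5
print(hlg_oetf(1.0))      # peak scene light encodes to 1.0
```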
It’s often assumed that HLG is less capable than PQ. As a strict mathematical comparison that’s true; PQ is capable of representing around 28 stops of dynamic range, whereas HLG, using 10-bit signals, can encode about 16. Both are comfortably more than the human eye can handle, though, which is often quoted as being around 14 stops without taking into account the action of the iris. That comparison, however, overlooks the crucial differences in the intent of the two systems and the way in which they’re designed. HLG devices could in theory expand beyond 10,000 cd/m2 if that became possible (which it might) or desirable (which is less certain). It’d be up to the display to make sensible use of that ability based on the data it was given.
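Those stop counts are just a base-2 logarithm of the ratio between the brightest and darkest levels a system can represent. The near-black floor below is an illustrative assumption (the exact minimum depends on bit depth and how near-black quantization is counted), but it reproduces the commonly quoted figure:

```python
import math

def stops(max_nits, min_nits):
    """Dynamic range in stops: each stop is one doubling of luminance."""
    return math.log2(max_nits / min_nits)

# Assumed floor of 0.00005 cd/m2 against PQ's 10,000 cd/m2 ceiling.
print(round(stops(10000, 0.00005), 1))   # ~27.6, usually rounded up to about 28 stops
```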
If this all seems like a rather academic differentiation, it might be in practice, although PQ is certainly more complicated in terms of the demands of producing and mastering new material. That’s especially so given that most of the PQ material actually being created is finished according to the Dolby Vision standard and to the needs of OTT distributors. The real difference in actual results has more to do with the attempt to make HLG at least partially backward-compatible; display an HLG signal on a conventional Rec. 709 display and it will be a little desaturated, though certainly watchable. PQ signals are not watchable on SDR displays, and are likely to require special handling at every stage of the production and post-production process.
But yes, crucially, neither system is even intended to guarantee identical results on every display, though a lot of weasel language is used, substituting “acceptable” or “appropriate” for “identical”. That is something we give up in pursuit of HDR, inasmuch as it was ever achievable.
In practice, HLG is largely seen as appropriate to broadcast TV, whereas PQ (or PQ-based systems) are being pushed heavily by OTT providers such as Netflix and Amazon who are extremely keen to produce a future-proof back catalog. In this context it’s perhaps slightly ironic that PQ is arguably less archival than HLG, inasmuch as HLG describes the scene and can take reasonable advantage of speculative, future display technologies. Conversely, PQ describes the intended appearance and all material finished in the format is therefore at least somewhat bound by the display technology of the time in which it was graded.
So, does any of this alter the concerns of camerawork, especially between PQ and HLG finishes? The practical answer is that the circumstances may be so different as to make the comparison difficult. People making exposure decisions for HLG material are likely to be broadcast vision engineers, whereas people shooting for PQ are likely to be directors of photography shooting single-camera drama (or commercials, or features). Some productions do finish in more than one format, though; are there any special concerns that affect HLG over PQ, or vice versa? Cautiously, not really, at least not in camera. The most straightforward problems of HDR – mainly issues of distractingly-bright highlights in unanticipated areas of the frame – are fairly universal. Getting something to clip to white is harder than it was, but as long as we ensure our biggest lights are sufficiently big, these are solvable problems.
In the end, because of the variability of display technologies, all HDR is limited by the capabilities offered by home TV manufacturers. At the time of writing, mastering displays were invariably very different from those TVs. It’s hard to welcome an age in which it’s expected that home TVs will differ significantly from mastering displays, especially when TFT-LCD displays had closed that gap somewhat. Still, there seems no great sign of a format war given the way PQ and HLG are typically being deployed in different fields. It’s probably a good thing that modern technology has made it so easy to include advanced picture processing electronics in home audio-visual equipment. We’re going to need it.