HDR For Cinematography - Part 2
In this second installment of our extended article on HDR for cinematography, we examine the practical aspects and applications of HDR.
Greater Flexibility
Companies such as Netflix are placing great demands on cinematographers for the productions they supply. Dolby Vision is the norm at 4K p60, with full-resolution 4:4:4 color sampling at 12 bits.
Even with these formats, the resolution of the cameras still exceeds that of the playout and broadcast system. Furthermore, cameras will improve at a greater rate than our ability to change broadcast formats, demanding greater flexibility in the system.
Camera manufacturers have provided their own solutions to acquisition, using versions of logarithmic transfer functions to map the 14-bit video from the camera sensor to something more manageable, such as 10- or 12-bit 4:4:4. Transfer functions such as Sony's S-Log, Arri's LogC, Canon Log, and Blackmagic Log all help squeeze as much information as possible into the 10- or 12-bit distribution and recording system to maintain compatibility with existing infrastructures.
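To illustrate the principle, and only the principle, the sketch below spreads exposure stops evenly across the signal range, which is what a log curve achieves. It is not any manufacturer's published formula; the mid-grey anchor and stop range are assumptions chosen for the example.

```python
import numpy as np

def demo_log_oetf(linear, mid_grey=0.18, range_stops=14.0, grey_signal=0.41):
    """Illustrative log-style OETF - NOT any vendor's published curve.
    Spreads exposure stops evenly across the 0..1 signal so that a
    ~14-stop sensor fits into a 10-bit recording. The mid-grey anchor
    (0.41) and the stop range are assumptions for the example."""
    floor = mid_grey * 2.0 ** (-range_stops / 2)           # darkest encoded value
    stops = np.log2(np.maximum(linear, floor) / mid_grey)  # exposure vs 18% grey
    return np.clip(grey_signal + stops / range_stops, 0.0, 1.0)

# quantize to 10-bit code values for recording
scene = np.array([0.0, 0.18, 0.90, 2.88])                  # linear scene light
print(np.round(demo_log_oetf(scene) * 1023).astype(int))   # [0 419 589 712]
```

Note how 18% grey, 90% reflectance white, and a value four stops above grey all land at comfortably separated code values, which is the whole point of the exercise.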
It’s also worth remembering that HLG and PQ are completely different from the log formats leaving the camera. For example, a live broadcaster might use Hybrid Log Gamma for their production, so they would use only the HLG output. When recording a movie production, one of the camera log formats might be used instead and then processed in post.
Recording Log Formats
Even though PQ is used as a delivery and transmission format for the broadcaster, the cinematographer will still need to record in a log format for later post-processing.
This also adds another interesting challenge: the cinematographer had much greater dynamic range and latitude when shooting with one of the 14-bit cameras for SDR productions. The camera-log recordings allowed the color grader to effectively lift detail from the shadows and highlights when color correcting, as there is much more information in the image than can be seen by the naked eye, allowing easier conversion to SDR. Now that the cinematographer is thinking in HDR terms, the latitude for error is much reduced, so they must focus more on making sure the images are correct during acquisition.
There was a time when the cinematographer would have known they could fix a problem in post, as there was a much higher margin for error when shooting for SDR. However, as we move to HDR productions, this margin has been almost completely removed. There is still some latitude, as the camera provides 14-bit data and a company such as Netflix requires 12 bits, but there’s not much in it.
Viewing Conditions
HLG and PQ are the two distribution formats playing out in the HDR arena. Although both have their good points, HLG is proving the most applicable to live productions. The system is scene referred, so broadcasters cannot make any assumptions about the viewer’s home television or mobile device.
Consequently, the signal-to-light relationship must be maintained. HLG is often graded and shaded for 1,000 cd/m², but a limit is nonetheless imposed.
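That signal-to-light relationship is made concrete by the HLG OETF defined in ITU-R BT.2100. The sketch below implements the published curve with its standard constants; the printed sample values are just a quick sanity check.

```python
import numpy as np

# HLG OETF constants from ITU-R BT.2100
A = 0.17883277
B = 0.28466892            # 1 - 4A
C = 0.55991073            # 0.5 - A*ln(4A)

def hlg_oetf(e):
    """BT.2100 HLG OETF: scene-linear light E (normalized 0..1) to signal E'.
    Scene referred: the signal describes relative scene light, and the
    display adapts it to its own peak brightness."""
    e = np.clip(np.asarray(e, dtype=float), 0.0, 1.0)
    return np.where(e <= 1/12,
                    np.sqrt(3 * e),
                    A * np.log(np.maximum(12 * e - B, 1e-6)) + C)

print(hlg_oetf([0.0, 1/12, 0.5, 1.0]))   # 0.0, 0.5, ~0.87, ~1.0
```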
It’s worth remembering that the potential brightness of a home television, often expressed in nits, is not intended to be the brightness of the whole screen. If we were to sit close to a 1,500-nit television displaying a full-screen peak white signal, we would certainly find the experience uncomfortable. Instead, the maximum brightness of a television or monitor refers to the brightness of specular highlights and peak transients.
This leads to some interesting situations for cinematographers. HLG works well for live events as it is scene referred: there is a direct relationship between the light level of the scene and the HDR signal level. PQ is display referred and allows the cinematographer to make some fundamental assumptions about the viewing environment.
Although PQ can work in the live environment, it certainly excels in the making of high-end movies. Metadata established during the grading and editing process provides information about how the image should be displayed. The viewer’s television or mobile device then uses this information to calibrate the screen so that it displays the cinematographer’s images as intended, often referred to as “artistic intent”.
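As one concrete example of such metadata, HDR10 carries the static values MaxCLL (maximum content light level) and MaxFALL (maximum frame-average light level). The sketch below shows how they are derived in principle; the per-pixel luminance frames are hypothetical stand-ins for the mastered picture.

```python
import numpy as np

def content_light_levels(frames):
    """Derive HDR10-style static metadata from decoded frames.
    `frames` is an iterable of 2-D arrays of per-pixel luminance in cd/m²
    (a hypothetical input - real tools derive this from the mastered picture).
    MaxCLL  = brightest single pixel anywhere in the programme.
    MaxFALL = highest frame-average luminance in the programme."""
    max_cll, max_fall = 0.0, 0.0
    for frame in frames:
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return max_cll, max_fall

# two hypothetical 4-pixel frames: a dim scene and one with a 1,500-nit highlight
frames = [np.array([[5.0, 12.0], [8.0, 20.0]]),
          np.array([[40.0, 1500.0], [60.0, 90.0]])]
print(content_light_levels(frames))   # (1500.0, 422.5)
```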
Color Subsampling Opportunities
PQ even facilitates a different method of color subsampling that helps maintain image quality during post-production. Although ICtCp can be used with HLG, the need to keep HLG compatible with existing live infrastructures greatly restricts its use there. Cinematography doesn’t suffer this restriction: after the day’s shoot, the rushes are taken to the post house for grading and later editing, generally using software-based systems that are not real-time critical.
ICtCp is similar to YCbCr in that it is a color difference system. “I” is the intensity (luma) component, Ct is the blue-yellow (tritanopia) color component, and Cp is the red-green (protanopia) color component. It differs from YCbCr in that it improves color subsampling and hue linearity. The key to ICtCp is that it provides color uniformity by taking advantage of aspects of the human visual system: optimizing lines of constant hue, uniformity of just-noticeable-difference, and constant luminance. YCbCr introduces distortions into saturated colors when subsampled due to the non-constant nature of its luma; this does not occur in ICtCp thanks to its nearly constant-luminance representation.
Mimicking the human eye, ICtCp involves three distinct operations: the incoming light is captured by three types of cones with peak sensitivities at the long (L), medium (M), and short (S) wavelengths; this captured linear light is converted into a non-linear signal to simulate the adaptive cone response of the HVS; and these non-linear signals are processed by a color differencing system in three pathways to produce the light-dark (intensity), yellow-blue (tritan isoluminant), and red-green (protan isoluminant) components.
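These three operations map directly onto the conversion defined in ITU-R BT.2100: a matrix from linear BT.2020 RGB to LMS cone-like responses, the PQ non-linearity, then a final matrix to ICtCp. A minimal sketch using the published coefficients:

```python
import numpy as np

# ITU-R BT.2100: ICtCp from linear BT.2020 RGB via the PQ non-linearity
RGB_TO_LMS = np.array([[1688, 2146,  262],
                       [ 683, 2951,  462],
                       [  99,  309, 3688]]) / 4096.0
LMS_TO_ICTCP = np.array([[ 2048,   2048,    0],
                         [ 6610, -13613, 7003],
                         [17933, -17390, -543]]) / 4096.0

# PQ (SMPTE ST 2084) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_inverse_eotf(y):
    """PQ non-linearity: linear light, normalized so 1.0 = 10,000 cd/m²,
    to a non-linear signal mimicking the adaptive cone response."""
    yp = np.power(np.clip(y, 0.0, 1.0), M1)
    return np.power((C1 + C2 * yp) / (1 + C3 * yp), M2)

def rgb2020_to_ictcp(rgb):
    """Linear BT.2020 RGB (1.0 = 10,000 cd/m²) -> ICtCp, per BT.2100."""
    lms = RGB_TO_LMS @ np.asarray(rgb, dtype=float)   # step 1: cone-like responses
    lms_p = pq_inverse_eotf(lms)                      # step 2: adaptive non-linearity
    return LMS_TO_ICTCP @ lms_p                       # step 3: intensity + color differences

print(rgb2020_to_ictcp([0.01, 0.01, 0.01]))   # 100 cd/m² grey: I ~ 0.51, Ct = Cp = 0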
The major benefit of using ICtCp is found in post-production, where multiple image-processing passes are performed. Converting between RGB and YCbCr introduces significant artifacts, and these are greatly reduced with ICtCp conversions.
As cameras and monitors improve, any non-linearities in YCbCr quickly become visible, but processing in ICtCp mitigates this.
No Longer Shackled to YCbCr
The ICtCp method can be used quite happily by cinematographers if they decide to do so, as they are not shackled by the same time constraints as broadcasters.
Cinematographers also need new methods of monitoring. For the first time in nearly fifty years, we have made a significant change to the color space: Rec.2020 has much greater vibrancy than Rec.709, especially in the greens and reds. Consequently, anybody working in television must now think more carefully about color space, especially about out-of-gamut errors.
On-screen displays showing potential color gamut errors are ideal, being much more descriptive and easier to use, especially in the field. False color mode is a method of highlighting areas of the picture where the colors exceed the target gamut.
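As a sketch of the idea behind such a gamut check, the code below converts linear BT.2020 pixels to BT.709 using the commonly cited conversion matrix and flags any pixel that lands outside the legal range; the test image and the magenta overlay color are arbitrary choices for the example.

```python
import numpy as np

# commonly cited linear BT.2020 -> BT.709 conversion matrix
BT2020_TO_BT709 = np.array([[ 1.6605, -0.5876, -0.0728],
                            [-0.1246,  1.1329, -0.0083],
                            [-0.0182, -0.1006,  1.1187]])

def out_of_709_gamut(img2020):
    """Flag pixels of a linear BT.2020 image (H x W x 3, values 0..1) whose
    colors fall outside the BT.709 gamut - the basis of a false-color overlay."""
    img709 = img2020 @ BT2020_TO_BT709.T
    return np.any((img709 < 0.0) | (img709 > 1.0), axis=-1)

# hypothetical 1x2 image: a neutral grey and a saturated BT.2020 green
img = np.array([[[0.5, 0.5, 0.5], [0.0, 0.8, 0.0]]])
mask = out_of_709_gamut(img)
false_color = np.where(mask[..., None], [1.0, 0.0, 1.0], img)  # paint offenders magenta
print(mask)   # [[False  True]]
```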
Linear Displays
The key luminance references used in HDR are 90% reflectance and 18% grey. Displays that can reverse the OETF of the camera, that is, the log transfer function used, allow the cinematographer to continuously view the linear image from the camera without having to be concerned with the camera’s transfer characteristic.
Look-up tables (LUTs) are a convenient method of converting from the log image to a linear display and further control how the data is presented to the cinematographer. Consequently, luminance can be displayed in either nits or f-stops.
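A minimal sketch of that idea, reusing the illustrative log curve from the earlier example: a pre-computed table maps 10-bit code values back to linear light, which can then be read out in nits or in stops relative to 18% grey. The calibration assumption (what luminance scene-linear 1.0 maps to) is hypothetical.

```python
import numpy as np

def build_display_lut(inverse_oetf, nits_at_unity=100.0, size=1024):
    """Pre-compute a 1-D LUT from 10-bit log code values to linear light.
    `inverse_oetf` undoes the camera's log curve (here the illustrative
    curve sketched earlier; a real workflow uses the manufacturer's).
    `nits_at_unity` is a hypothetical calibration: the display luminance
    assigned to scene-linear 1.0."""
    codes = np.arange(size) / (size - 1)               # normalized code values
    linear = inverse_oetf(codes)                       # back to scene-linear light
    nits = linear * nits_at_unity                      # readout 1: cd/m²
    stops = np.log2(np.maximum(linear, 1e-6) / 0.18)   # readout 2: stops from 18% grey
    return nits, stops

# inverse of the illustrative log curve from the earlier sketch
inverse_demo_log = lambda s: 0.18 * 2.0 ** ((s - 0.41) * 14.0)
nits_lut, stops_lut = build_display_lut(inverse_demo_log)
print(round(float(nits_lut[419]), 1), round(float(stops_lut[419]), 2))  # ~18% grey
```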
The advent of HDR and WCG is not only providing broadcasters with new and improved images to enhance the viewer’s immersive experience, but also provides new opportunities for cinematographers to deliver higher-quality images than would traditionally have been possible in live television.
Cinematographers are able to use features within HDR and WCG that are not applicable to broadcasters, as there is no great need to maintain compatibility with such systems, opening up a whole new range of opportunities.