The techniques of 35mm film are seductively simple. The process is the same no matter which camera is in use, or how the film will be cut. By contrast, every digital camera may have its own way of approaching each part of the process, creating a forest of terminology – gamma, gamut, subsampling – that’s easily misunderstood. Let’s follow a picture from the sensor to the recorded file and figure out exactly what all this means.
The broadcast industry is mired in resolution confusion. HD is the format du jour, 4K UHD is emerging quickly, and proponents of 8K refuse to stay quiet. It’s enough to make a broadcast engineer’s head spin.
As High Dynamic Range (HDR) and Wide Color Gamut (e.g. BT.2020) are increasingly mandated by major industry players like Netflix and Amazon, DOPs in the broadcast realm are under intense pressure to get it right during original image capture. We all know (or have learned the hard way) that the detail required to produce an optimal HDR master cannot be recreated or effectively added downstream.
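To make that concrete, consider the SMPTE ST 2084 (PQ) curve used for most HDR masters. The minimal sketch below is ours, not part of the original piece: the constants come from the published standard, while the function name and print loop are purely illustrative. It shows how heavily PQ spends its code values on shadow and midtone detail, detail that has to exist in the camera original to survive.

```python
# Sketch of the SMPTE ST 2084 (PQ) inverse EOTF: the curve that maps
# absolute luminance to a 0..1 code value for HDR mastering.
# Constants are taken from the standard; the function name is ours.

M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_encode(nits: float) -> float:
    """Encode absolute luminance (cd/m^2, up to 10,000) as a PQ code value."""
    y = max(nits, 0.0) / 10000.0      # normalize: 1.0 means 10,000 nits
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1.0 + C3 * y_m1)) ** M2

# Roughly half the code range sits below 100 nits, which is why shadow
# and midtone detail must be captured at the source:
for nits in (0.1, 1, 10, 100, 1000, 10000):
    print(f"{nits:>7} nits -> PQ code value {pq_encode(nits):.3f}")
```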
Most people are aware that a vast range of colors can be mixed from red, green and blue light, and we make color pictures out of red, green and blue images. The relationship between modern color imaging and the human visual system was recently discussed by John Watkinson in his series on color. In this piece, we’re going to look at something that comes up often in modern film and TV technique: color gamuts. It’s a term that suffers a lot of misuse, but the basics are simple: a color image uses red, green and blue, and the gamut describes which red, which green, and which blue we’re using.
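As a small illustration of our own (not drawn from the article itself): once you know which primaries each gamut uses, moving linear-light RGB from one gamut to another is a single 3×3 matrix multiply per pixel. The coefficients below are the BT.709-to-BT.2020 conversion matrix published in ITU-R BT.2087; the function name is hypothetical.

```python
# Sketch: converting linear-light RGB between gamuts is one 3x3 matrix
# multiply per pixel. These coefficients are the BT.709-to-BT.2020
# conversion matrix published in ITU-R BT.2087.

RGB709_TO_RGB2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def convert_709_to_2020(rgb):
    """Re-express a linear BT.709 RGB triple in terms of BT.2020 primaries."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in RGB709_TO_RGB2020)

# Pure BT.709 red is not pure BT.2020 red: the smaller gamut's primaries
# sit inside the larger gamut, so "red" means different things in each.
print(convert_709_to_2020((1.0, 0.0, 0.0)))  # ~(0.6274, 0.0691, 0.0164)
```

Feeding in pure BT.709 red shows why the distinction matters: the smaller gamut’s red is an unsaturated mixture when expressed in BT.2020 terms.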
For almost as long as photography has existed, people have pursued ways of modifying the picture after it’s been shot. The “dodge” and “burn” tools in Photoshop are widely understood as ways to make things brighter or darker, but it’s probably less widely understood that they refer to exposure-control techniques dating all the way back to the earliest days of darkroom image processing. Bring moving images into the mix and consistency becomes a big concern too. Individual still photographs might be part of a single exhibition, but they have no concept of being cut together in a sequence.
Dealing with brightness in camera systems sounds simple: increase the light going into the lens, and the signal level coming out of the camera increases, and in turn so does the amount of light coming out of the display. In reality, it’s always been more complicated than that. Camera, display and postproduction technologies have been chasing each other for most of the last century, especially since the late 1990s and early 2000s, when electronic cameras started to become good enough for serious single-camera drama work.
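One taste of that complication, sketched here with constants taken from ITU-R BT.709 (the function name and example loop are ours): the signal a camera outputs is not proportional to the light it receives. The standard’s opto-electronic transfer function bends the response so that code values go where the eye is most sensitive.

```python
# Sketch of the ITU-R BT.709 opto-electronic transfer function (OETF),
# the non-linear mapping between scene light and camera signal.
# Constants are from the standard; the function name is ours.

def bt709_oetf(light: float) -> float:
    """Map normalized linear scene light (0..1) to a BT.709 signal level."""
    if light < 0.018:
        return 4.500 * light               # linear segment near black
    return 1.099 * light ** 0.45 - 0.099   # power-law segment above it

# Doubling the light does not double the signal: each one-stop increase
# in exposure moves the output by a smaller step as it approaches white.
for light in (0.125, 0.25, 0.5, 1.0):
    print(f"light {light:5} -> signal {bt709_oetf(light):.3f}")
```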