Sensors and Lenses - Part 2

Last time, we talked about the history that created modern digital cinema technology, and particularly the factors which led to the modern push for ever larger sensors. That push has been going on in some form for twenty years, to the point that we’re now asking for bigger imagers than cinema has ever commonly used, achieving more resolution than cinema commonly achieved, with greater sensitivity than was ever available to directors of photography in the twentieth century. To get that, we’re tolerating all kinds of inconveniences in terms of the lenses we must use, and the light levels, or the sheer feats of focus pulling, that big chips tend to demand.

Why are we doing this? The trivial justification is – to put a positive spin on it – greater selectivity in focus, but there has to be more to it than that.

Getting deeper into this requires that we know what we’re doing with all that extra silicon, and why. The fundamental compromise is between resolution and a combination of sensitivity, dynamic range, noise performance and color accuracy. There was a time when electronic imaging was engaged in a perpetual tail chase with film, but as we’ve seen, that time is long over. When Sony started using the slogan “beyond definition,” it was already starting to become clear, in the collective consciousness, that the resolution race was nearing a natural conclusion. The move toward HDR is often taken to be mainly about brightness, but it’s also closely associated with better color performance, and for reasons we’ll discuss later, bigger sensitive areas actually serve both those goals.

Photosites and Sensitivity

Sensor designers have effectively three ways to use a bigger sensor: put more photosites on it, make the photosites bigger, or some combination of the two. A larger photosite can hold more electrons – what the industry calls “full well capacity” – meaning the camera can detect a lot more photons before it hits peak white. That improves dynamic range, something that’s widely associated with images which are “cinematic,” for some value of that rather fuzzy term. The larger photosite is also simply more likely to have a photon land on it, so it may behave as if it were more sensitive. At the same time, using the extra space for more photosites gives us higher resolution without otherwise sacrificing performance; some manufacturers have used fundamentally the same photosite design on a larger sensor to gain resolution while keeping noise and color performance intact.
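To put rough numbers on the full-well idea, here’s a minimal sketch in Python. The photosite figures are hypothetical, and real dynamic range measurement involves far more than a single ratio, but it shows why a deeper well translates directly into more stops between the noise floor and peak white.

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Approximate engineering dynamic range: the ratio between the largest
    signal a photosite can hold (full well capacity, in electrons) and the
    noise floor (read noise, in electrons), expressed in stops."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical photosites: a small one, and one with four times the area
# (and so, roughly, four times the full well capacity).
small = dynamic_range_stops(full_well_e=15_000, read_noise_e=3.0)  # ~12.3 stops
large = dynamic_range_stops(full_well_e=60_000, read_noise_e=3.0)  # ~14.3 stops
print(f"small photosite: {small:.1f} stops, large photosite: {large:.1f} stops")
```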

So yes, it’s no great revelation that a larger sensor can genuinely have better performance than a smaller one, all else being equal. In reality, all else isn’t always equal, and new work in sensor design usually involves finding ways to improve one of the key figures of merit – sensitivity, dynamic range – without relying on a bigger photosite. One good example is the issue of fill factor, which refers to the amount of the sensor that’s actually sensitive to light. On modern sensors, each photosite tends to be associated with some electronics which take up area that could otherwise be receiving light. Developments since the CCD age have made it ever more possible to stack electronics behind the photosite so that more of the front face of the sensor can actually detect light.
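As a very rough way of putting numbers on fill factor, here’s a sketch with hypothetical dimensions. Real photosites aren’t simple squares, and microlenses recover some of the otherwise-wasted light, but the arithmetic shows why moving the electronics behind the photosite helps.

```python
def fill_factor(pitch_um, aperture_um):
    """Fraction of a photosite's footprint that actually collects light,
    treating the photosite and its light-sensitive aperture as squares."""
    return (aperture_um / pitch_um) ** 2

# Hypothetical front-illuminated design: 6 um pitch, 4.5 um usable aperture.
front = fill_factor(6.0, 4.5)   # 0.5625 -> roughly 56% of the incoming light
# Stacking the readout electronics behind the photosite lets the aperture
# approach the full pitch.
back = fill_factor(6.0, 5.8)    # roughly 93%
print(f"front-illuminated: {front:.0%}, stacked/back-illuminated: {back:.0%}")
```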

Noise vs Resolution

For any given sensor, though, assuming the underlying technology is fixed, a designer can choose more photosites or bigger photosites. If that seems like laboring the obvious, there’s something slightly subtle to realize here. It’s difficult to express simply how much sheer image quality something has; we normally talk about resolution and noise independently. Still, if we consider that a sensor with bigger photosites has less noise but lower resolution, and one with smaller photosites has higher resolution but more noise, we might start to think that the absolute amount of sheer picture quality available from a given area of silicon, using a given level of technology, is fairly fixed. That’s especially true if we take into account the fact that modern sensors are often very high resolution – higher than the finished production requires, in many cases. Scaling a frame down averages pixels together, reducing both noise and resolution, so we can even choose what sort of compromise we want in post-production.

In the end, though, it is something of a zero-sum game.
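Here’s a minimal sketch of that trade, using a synthetic flat grey frame with Gaussian noise standing in for a real sensor; the resolutions and noise level are arbitrary. Averaging each 2×2 block of photosites halves the random noise, at the cost of half the linear resolution.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A flat grey test frame from a hypothetical noisy high-resolution sensor.
signal = 0.5
frame = signal + rng.normal(scale=0.05, size=(2160, 3840))

# Scale down 2x in each direction by averaging 2x2 blocks of photosites.
binned = frame.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(f"noise before scaling: {frame.std():.4f}")   # ~0.0500
print(f"noise after scaling:  {binned.std():.4f}")  # ~0.0250, i.e. halved
```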

Diagram 1 – Bayer proposes a CMY image sensor pattern as well as his famous RGB pattern.

All that said, there is one factor that we haven’t considered so far: color. Most people are aware that silicon sensors naturally see only brightness, which is why we call each individual sensitive area a photosite, not a pixel. A pixel in a color image needs at least three components; each photosite gives us only one. By default, most silicon sensors are most sensitive to green-yellow light, so when we filter the photosites for color according to Bryce Bayer’s design, the green channel is generally the most sensitive, red comes second, and blue is by far the least sensitive. That’s why, if we look at individual color channels from modern cameras, the blue channel is generally noisier than the others: it demands the most amplification to reach the same signal level as red and green and so achieve a proper color balance.
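A quick sketch of why that amplification hurts. The channel responses below are purely illustrative and the noise floor is the same on every channel, but the channel that starts furthest behind ends up the noisiest once everything is gained up to match green.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical raw responses of the three channels to a neutral grey subject:
# green collects the most signal, blue the least (the values are illustrative).
raw = {"red": 0.35, "green": 0.50, "blue": 0.20}
read_noise = 0.01   # the same noise floor on every channel

for name, level in raw.items():
    gain = raw["green"] / level                    # gain needed to match green
    samples = level + rng.normal(scale=read_noise, size=100_000)
    balanced = samples * gain
    print(f"{name:5s} gain {gain:4.2f}x -> noise {balanced.std():.4f}")

# Blue needs the largest gain, so its noise ends up roughly 2.5x that of green.
```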

Saturated Colors Reduce Sensitivity

This can create something of a perverse incentive when it comes to the red, green and blue filter colors that are actually chosen for a sensor. Instinctively, we might reach for deep, saturated primaries, with the idea that they would allow the camera to differentiate accurately between the colors it sees. That works, but it creates a compromise: deep, saturated filters absorb a lot of light. That naturally reduces sensitivity, so sensor designers have at least some incentive to use paler, less saturated filters with the aim of creating a more light-sensitive camera. Crucially, sensitivity is a reasonably easily measured, saleable specification; colorimetry often isn’t.

The pictures from a sensor with less saturated primaries will, by default, be less saturated, something that’s generally addressed in the in-camera image processing. It was once called “matrixing,” but in general it boils down to turning up the saturation. That’s not a precise art, though, and when modern cameras are criticized for having poor colorimetry, or for outright distorting colors, overzealous matrixing is often one cause. Typical problems include rendering very deep blues – as created by blue LEDs – as purple, turning all greenish-blue hues into the same shade of turquoise, or all reddish-yellows into the same orange. It’s generally subtle in real-world cameras, though it may explain why some of the most respected designs are prized for reproducing colors well yet often don’t have massively high sensitivity.
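Here’s a minimal sketch of what “turning up the saturation” with a matrix actually does, using Rec. 709 luma weights and an illustrative washed-out red. Real camera matrices are tuned per sensor and are considerably more sophisticated, but the principle – and the way it amplifies errors along with the wanted color – is the same.

```python
import numpy as np

def saturation_matrix(s):
    """3x3 matrix that scales saturation by a factor s about the luma axis,
    using Rec. 709 luma weights. s = 1 leaves the image unchanged."""
    w = np.array([0.2126, 0.7152, 0.0722])
    return np.outer(np.ones(3), w) * (1 - s) + np.eye(3) * s

# A washed-out red, as a camera with pale filters might record it (illustrative).
pale_red = np.array([0.60, 0.40, 0.40])

boosted = saturation_matrix(2.0) @ pale_red
print(boosted)   # roughly [0.76, 0.36, 0.36]: redder, but any errors in the
                 # original values get boosted just as enthusiastically.
```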

Foveon Stacking

There are other ways to create color cameras. The Foveon design (purchased outright by Sigma in 2008) stacks photosites in sets of three, relying on the deeper penetration of redder light into the silicon to separate colors. Again, there are issues with color separation that require a lot of postprocessing, but it’s an interesting option. The other way to create color from monochrome sensors is the three-chip block of a broadcast camera, where the incoming light is split into red, green and blue images which land on three physically separate sensors. The results are good, but it complicates lens design, and the approach fell out of favor for single-camera drama filmmaking after cameras like the Thomson Viper and Sony F23, back in the 2000s. It is still used in up-to-the-minute designs like Sony’s UHC-8300 8K broadcast camera: packing enough pixels for 8K resolution onto a single chip small enough to work with a broadcast lens would invite a lot of noise, so it’s no surprise the designers reached for the classic color splitter block (the camera even includes optical components to enlarge the image so that standard lenses can cover its larger-than-average sensors).

Variations on single-chip color patterns exist – Bayer himself suggested that the cyan, magenta and yellow secondary colors could be used, which reduces absorption in the filters at the expense of demanding more postprocessing. That’s interesting because, to return to the perennial comparison with film that we discussed last time, there is a level on which photochemical film is a cyan, magenta and yellow format too: those are the colors of the dyes that form the image, each absorbing one of red, green and blue. The range of colors film can reproduce is affected by a lot of complex factors, but compared to even quite basic modern cameras, film as used in a traditional cinema release is soft, noisy and insensitive. It’s possible to sacrifice quite a lot of sensitivity for color performance before a modern digital camera becomes as slow as film.
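An idealized sketch of why complementary filters gather more light but demand more work afterwards. The scene color is illustrative and the filters are treated as perfect, which real ones certainly aren’t, but it shows where the extra processing – and its noise penalty – comes from.

```python
import numpy as np

# Idealized complementary filters: cyan passes green+blue, magenta passes
# red+blue and yellow passes red+green, so each photosite collects light
# from two-thirds of the spectrum rather than one-third.
scene_rgb = np.array([0.30, 0.50, 0.20])      # illustrative scene color
r, g, b = scene_rgb
cmy = np.array([g + b, r + b, r + g])         # what the sensor records

# Recovering RGB means differencing the channels...
c, m, y = cmy
recovered = np.array([(m + y - c) / 2,
                      (y + c - m) / 2,
                      (c + m - y) / 2])
print(recovered)   # [0.3, 0.5, 0.2] in this noiseless, idealized case

# ...and that differencing is where the extra postprocessing, and its noise
# penalty, comes in: noise on the CMY channels adds up in the subtractions
# rather than cancelling out.
```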

That sort of compromise, which affects how we might actually use all this technology, will be our subject next time, when we’ll talk about all those situations which bring out a sheen of sweat on the forehead of the average focus puller.
