HDR: Part 13 - Cameras And The Zero-Sum Game

For a long time, selecting camera gear has rarely been easy. For twenty years, digital cinema cameras have never quite had everything we wanted, and the choice often boiled down to comparing the compromises. That’ll always be true to a degree, but for the last year or two it’s felt like we’re arriving somewhere. We can’t have everything, but we can have more than enough, and those compromises are boiling down to a zero-sum game.

What does that mean? Well, make a bigger-chip camera for lower noise and we end up using longer lenses to achieve the same field of view. Longer lenses magnify things more, so out-of-focus areas look more out of focus, which is where we get the idea that longer lenses reduce depth of field. So we might have to stop down to get back to the same depth of field, to make accurate focus achievable. Stop down, though, and we’ve darkened the picture, so we might have to select a higher sensitivity. Higher sensitivity on a digital camera just means gain, which increases noise, compromising exactly the things we were trying to achieve with a bigger sensor to begin with.
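The crop-factor arithmetic behind that spiral can be sketched in a few lines. This is a rough model, and the sensor widths below are typical figures, not measurements of any particular camera:

```python
def equivalent_focal_length(focal_mm, from_width_mm, to_width_mm):
    """Focal length on the second sensor that matches the
    horizontal field of view of focal_mm on the first sensor."""
    return focal_mm * (to_width_mm / from_width_mm)

SUPER35 = 24.9    # mm, typical super-35 sensor width (assumption)
FULL_FRAME = 36.0 # mm, full-frame sensor width

# A 35mm lens on super-35 needs roughly a 50mm lens on full-frame
# to frame the same shot.
print(equivalent_focal_length(35, SUPER35, FULL_FRAME))
```

Run the numbers and the 35mm comes out at around 50.6mm on full-frame, which is why the longer-lens, shallower-focus chain of consequences kicks in the moment the sensor grows.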

There are other solutions. We can simply add more light, though that’s an expensive approach that might be a big ask of a production that’s already paying for a big-chip camera. And it’s not even particularly effective; double the amount of light and you’ve bought the focus puller a stop, which is welcome, but not enough to offset the difference in depth of field between a super-35 and full-frame sensor in otherwise equivalent circumstances. Quadruple the amount of light and – well – that’s two stops. That’s great, but it’s well beyond the ability of most productions to do.
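The arithmetic of buying back exposure with light is simple but brutal: every extra stop doubles the light required, so gains scale as the base-2 logarithm of the light multiplier. A quick sketch:

```python
import math

def stops_gained(light_multiplier):
    """Stops of exposure gained by multiplying the light level."""
    return math.log2(light_multiplier)

print(stops_gained(2))   # double the light buys one stop
print(stops_gained(4))   # quadruple it for two stops
print(stops_gained(16))  # four stops costs sixteen times the light
```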

Optical Quality

The vicious circle continues in other ways. Instead of improving noise, we might want a bigger sensor to improve resolution without having to accept a noise penalty. The problem is, to achieve, say, f/2, a lens must (at the very least) have a diameter that’s equal to half its focal length. But the focal length is longer, for an equivalent field of view, on our larger sensor. The lens must therefore be larger – which is intuitive enough – to achieve the same f stop at the same field of view and with the same quality. That bigger lens will be considerably more expensive. If it isn’t sufficiently expensive, the optical quality might begin to suffer, compromising the improved resolution we wanted to begin with.
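That diameter requirement falls straight out of the definition of the f-number: focal length divided by entrance pupil diameter. A sketch, using the same assumed super-35 and full-frame equivalence as before:

```python
def entrance_pupil_diameter(focal_mm, f_number):
    """Minimum entrance pupil diameter implied by an f-number."""
    return focal_mm / f_number

# At f/2 a lens must be at least half its focal length across.
print(entrance_pupil_diameter(35, 2.0))  # 17.5mm pupil on super-35
print(entrance_pupil_diameter(50, 2.0))  # 25.0mm for the full-frame equivalent
```

The full-frame equivalent needs a pupil nearly half again as large, and front elements, glass volume and cost scale up from there.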

That’s a zero-sum game: a combination of practicalities that leaves us with a certain maximum level of image quality. No matter which compromises we choose, we’re trading one thing off against another.

Larger Sensors

Going any further down the road of bigger sensors probably isn’t an idea with much of a future. More light and better glass help, but costs can only escalate so far. There certainly doesn’t seem to have been much of a push for even larger sensors in mainstream cinematography. There have been digital medium-format camera backs – generally not covering the full medium-format frame, but still very large – which will shoot video, and perhaps it’s only a matter of time before Imax steps in with a sensor the size of a 15-perf 65mm negative. The purpose of Imax is not really restraint or moderation, after all. Still, yet-bigger chips seem likely to stay in their specialist niche.

Is there a better solution? Sure, though it applies to every imaging sensor ever made. All you have to do is find a way to make each square micron of sensor area more sensitive to light, without compromising anything else.

Improving Sensitivity

That is what sensor manufacturers’ R&D departments spend their days trying to do. One key figure of merit is “quantum efficiency.” Ideally, a photosite on a sensor would capture every photon that struck it and convert that photon into an electron. Real-world designs are not quite that perfect. Equally, modern sensors have built-in hardware which converts the number of electrons captured – fundamentally an analog signal – into a digital one. Doing that creates noise we’d prefer wasn’t there. The reality is that a competitive sensor in 2020 can record light levels up to a few tens of thousands of photons per frame, per photosite, with a handful of electrons in read noise.
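Those numbers imply a signal-to-noise ceiling. Photon arrival follows Poisson statistics, so shot noise is the square root of the captured signal, and read noise adds in quadrature. A simplified model, assuming perfect quantum efficiency unless told otherwise:

```python
import math

def snr(photons, read_noise_e, qe=1.0):
    """Signal-to-noise ratio for one photosite in one frame,
    under a simple shot-noise-plus-read-noise model."""
    signal = photons * qe                   # electrons captured
    shot_noise = math.sqrt(signal)          # Poisson shot noise
    total_noise = math.sqrt(shot_noise**2 + read_noise_e**2)
    return signal / total_noise

# Near full well, shot noise dominates: roughly 173:1
print(snr(30000, 3))
# Deep in the shadows, read noise dominates: roughly 4.3:1
print(snr(25, 3))
```

That shadow figure is why read noise, not full-well capacity, is what limits usable sensitivity at the dark end of the picture.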

Naturally, we’d like more, which is where scale helps. The simplest way to improve things is to make the photosite bigger so more photons are likely to hit it and more will fit in it, which demands a bigger sensor for the same resolution. We’ve done that, though. Now we have to find a way to achieve higher sensitivity without just scaling things up; to break that zero-sum game.
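How much does scale actually buy? Photon capture is roughly proportional to photosite area, so gains in stops grow with the square of the pitch. A rough illustration, with hypothetical pitch figures:

```python
import math

def stops_from_pitch(old_pitch_um, new_pitch_um):
    """Stops of extra light gathered per photosite as pitch grows,
    assuming captured photons scale with photosite area."""
    area_ratio = (new_pitch_um / old_pitch_um) ** 2
    return math.log2(area_ratio)

# Growing a (hypothetical) 4 micron pitch to 6 microns
# multiplies the area by 2.25: about 1.17 stops per photosite.
print(stops_from_pitch(4.0, 6.0))
```

A useful gain, but at the same resolution it demands a sensor half again as wide, which is exactly the scaling-up we’re trying to escape.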

Various approaches to doing that have been tried. One is to maximize the amount of the sensor that’s covered in photosites, minimizing the extra circuitry that’s around each one. This is why many sensors have a rolling shutter, as the extra electronics for global shuttering take up more space. We can also make more room for photosite area by separating the photosites from the electronics and stacking them in layers. People have even put tiny arrays of lenses on the front of sensors to focus light on the active areas, though that can cause problems with lenses that fire light at the sensor at anything other than a right angle. By far the most popular approach, for all sorts of reasons, is to reduce the saturation of the filters which allow the sensor to see in color, compromising color performance for sensitivity.

Zero Sum Choice

The best of those ideas are the fundamental advances, and they’re the ones that deliver real gains. They come very slowly, though. Yes, we can pay more money for more performance, tolerating a lower yield and more rejects in sensor manufacturing for a design that really pushes the envelope, but as with anything, the last 5% of the performance costs the last 50% of the money. Real progress in sensor design comes much less frequently than the camera market needs it to, so we trade off size, resolution, color performance – and when we’re selecting a camera for a job, we dance around the zero-sum game.
