Selecting camera gear has never been entirely easy. For twenty years, digital cinema cameras have never quite had everything we wanted, and the choice has often boiled down to comparing the compromises. That’ll always be true to a degree, but for the last year or two it’s felt like we’re arriving somewhere. We can’t have everything, but we can have more than enough, and those compromises are boiling down to a zero-sum game.
What does that mean? Well, make a bigger-chip camera for lower noise and we end up using longer lenses to achieve the same field of view. Longer lenses magnify things more, so out of focus areas look more out of focus, which is where we get the idea that longer lenses reduce depth of field. So, we might have to stop down to get to the same depth of field, to make accurate focus achievable. Stop down, though, and we’ve darkened the picture, so we might have to select a higher sensitivity. Higher sensitivity on a digital camera just means gain, which increases noise, compromising exactly the things we were trying to achieve with a bigger sensor to begin with.
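That chain of trade-offs can be put in rough numbers. As a sketch, assuming a 36mm-wide full-frame sensor and a 24.9mm-wide super-35 sensor (both round, typical figures, not any particular camera):

```python
import math

# Assumed sensor widths in mm, typical published figures.
S35_WIDTH = 24.9   # super-35
FF_WIDTH = 36.0    # full frame

crop = FF_WIDTH / S35_WIDTH        # ~1.45x crop factor

# The same field of view on the bigger chip needs a longer lens.
s35_focal = 35.0                   # mm, an arbitrary example
ff_focal = s35_focal * crop        # ~50.6 mm

# Matching depth of field means multiplying the f-number by the crop
# factor, and exposure falls with the square of the f-number, so the
# light lost in stops is:
stops_lost = 2 * math.log2(crop)   # ~1.06 stops

print(f"equivalent focal length: {ff_focal:.1f} mm")
print(f"stop down by about {stops_lost:.2f} stops to match depth of field")
```

That one-stop-and-a-bit is the gain the bigger sensor has to claw back before it shows any net noise advantage.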
There are other solutions. We can simply add more light, though that’s an expensive approach that might be a big ask of a production that’s already paying for a big-chip camera. And it’s not even particularly effective; double the amount of light and you’ve bought the focus puller a stop, which is welcome, but not enough to offset the difference in depth of field between a super-35 and full-frame sensor in otherwise equivalent circumstances. Quadruple the amount of light and – well – that’s two stops. That’s great, but it’s well beyond the means of most productions.
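The arithmetic here is unforgiving: each stop costs a doubling of light. A minimal sketch, again assuming round 36mm and 24.9mm sensor widths:

```python
import math

def stops_gained(light_multiplier):
    """Each doubling of the light level buys exactly one stop."""
    return math.log2(light_multiplier)

# The crop factor sets the depth-of-field gap in stops between the
# two formats at the same field of view.
crop = 36.0 / 24.9                  # full frame vs super-35
dof_gap = 2 * math.log2(crop)       # ~1.06 stops

print(f"double the light: {stops_gained(2):.0f} stop")
print(f"quadruple the light: {stops_gained(4):.0f} stops")
print(f"stops needed to close the DoF gap: {dof_gap:.2f}")
```

So doubling the lighting package gets you almost, but not quite, back to where the smaller sensor started.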
The vicious circle continues in other ways. Instead of improving noise, we might want a bigger sensor to improve resolution without having to accept a noise penalty. The problem is, to achieve, say, f/2, a lens must (at the very least) have a diameter that’s equal to half its focal length. But the focal length is longer, for an equivalent field of view, on our larger sensor. The lens must therefore be larger – which is intuitive enough – to achieve the same f stop at the same field of view and with the same quality. That bigger lens will be considerably more expensive. If it isn’t sufficiently expensive, the optical quality might begin to suffer, compromising the improved resolution we wanted to begin with.
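The size penalty follows directly from the definition of the f-number: the entrance pupil diameter is the focal length divided by the f-number. A quick sketch, using the same illustrative 35mm and ~50.6mm equivalent focal lengths as before:

```python
def pupil_diameter_mm(focal_mm, f_number):
    # Entrance pupil diameter = focal length / f-number.
    return focal_mm / f_number

s35 = pupil_diameter_mm(35.0, 2.0)   # 17.5 mm pupil on super-35
ff = pupil_diameter_mm(50.6, 2.0)    # ~25.3 mm for the same view, full frame

# Front-element glass area grows with the square of the diameter.
area_ratio = (ff / s35) ** 2         # roughly twice the glass

print(f"{s35:.1f} mm vs {ff:.1f} mm pupil; about {area_ratio:.1f}x the glass area")
```

Twice the glass area, at the same optical quality, is where much of the extra cost comes from.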
That’s a zero-sum game: a combination of practicalities that leaves us with a certain maximum level of image quality. No matter which compromises we choose, we’re trading one thing off against another.
Going any further down the road of bigger sensors probably isn’t an idea with much of a future. More light and better glass help, but costs can only escalate so far. There certainly doesn’t seem to have been much of a push for even larger sensors in mainstream cinematography. There have been digital medium-format camera backs – generally not covering the full medium-format frame, but still very large – which will shoot video, and perhaps it’s only a matter of time before Imax steps in with a sensor the size of a 15-perf 65mm negative. The purpose of Imax is not really restraint or moderation, after all. Still, the yet-bigger-chip sort of thing seems likely to stay in its specialist niche.
Is there a better solution? Sure, though it applies to every imaging sensor ever made. All you have to do is find a way to make each square micron of sensor area more sensitive to light, without compromising anything else.
That is what sensor manufacturers’ R&D departments spend their days trying to do. One key figure of merit is “quantum efficiency.” Ideally, a photosite on a sensor would capture every photon that struck it and convert each one into an electron. Real-world designs are not quite that perfect. Equally, modern sensors have built-in hardware which converts the number of electrons captured, fundamentally an analog signal, into a digital one. Doing that creates noise we’d prefer wasn’t there. The reality is that a competitive sensor in 2020 can record light levels up to a few tens of thousands of photons per frame, per photosite, with a handful of electrons in read noise.
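Those two figures, capacity and read noise, bound the whole system. As a sketch with plausible but hypothetical 2020-era numbers (30,000 electrons of full-well capacity, 3 electrons of read noise, not any specific sensor):

```python
import math

# Assumed, illustrative figures for a competitive 2020 sensor.
full_well_e = 30_000    # electrons a photosite can hold at saturation
read_noise_e = 3.0      # electrons of read noise

# Dynamic range: how far the brightest recordable level sits above
# the noise floor, expressed in stops.
dr_stops = math.log2(full_well_e / read_noise_e)    # ~13.3 stops

# Photon arrival is random (shot noise, sqrt(N)), so even a perfect
# readout limits signal-to-noise at full exposure.
snr_full = full_well_e / math.sqrt(full_well_e + read_noise_e ** 2)

print(f"dynamic range: {dr_stops:.1f} stops")
print(f"SNR at full well: {snr_full:.0f}:1")
```

Note that at full exposure the shot noise dwarfs the read noise; the read noise matters most in the shadows, which is exactly where cameras are judged.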
Naturally, we’d like more, which is where scale helps. The simplest way to improve things is to make the photosite bigger so more photons are likely to hit it and more will fit in it, which demands a bigger sensor for the same resolution. We’ve done that, though. Now we have to find a way to achieve higher sensitivity without just scaling things up; to break that zero-sum game.
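The scaling argument is simple geometry: collection area, and therefore photon count, goes with the square of the photosite pitch. A minimal sketch with arbitrary example numbers:

```python
import math

def photons_captured(pitch_um, flux_per_um2):
    # Photon count scales with collection area, i.e. pitch squared.
    return flux_per_um2 * pitch_um ** 2

# Arbitrary illustrative values: 4 vs 8 micron photosites, same light.
small = photons_captured(4.0, 100.0)    # 1600 photons
big = photons_captured(8.0, 100.0)      # 6400 photons

# Shot-noise SNR goes as sqrt(N): four times the photons is only
# twice the signal-to-noise ratio.
snr_gain = math.sqrt(big / small)
print(f"{big / small:.0f}x the photons, {snr_gain:.0f}x the shot-noise SNR")
```

Which is why the brute-force route is so expensive: quadrupling the silicon only doubles the noise performance.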
Various approaches to doing that have been tried. One is to maximize the amount of the sensor that’s covered in photosites, minimizing the extra circuitry that’s around each one. This is why many sensors have a rolling shutter, as the extra electronics for global shuttering take up more space. We can also make more room for photosite area by separating the photosites from the electronics and stacking them in layers. People have even put tiny arrays of lenses on the front of sensors to focus light on the active areas, though that can cause problems with lenses that fire light at the sensor at anything other than a right angle. By far the most popular approach, for all sorts of reasons, is to reduce the saturation of the filters which allow the sensor to see in color, compromising color performance for sensitivity.
Zero-Sum Choice
The best of those ideas are the fundamental advances, the genuinely new engineering, and they can give us real progress. They come very slowly, though. Yes, we can pay more money for more performance, tolerating a lower yield and more rejects in sensor manufacturing for a design that really pushes the envelope, but as with anything, the last 5% of the performance costs the last 50% of the money. Real progress in sensor design comes much less frequently than the camera market needs it to, so we trade off size, resolution, color performance – and when we’re selecting a camera for a job, we dance around the zero-sum game.