Sensors and Lenses - Part 3

There’s a terrible tendency in cinematography to concentrate too much on the technology, overlooking creative skills that often make a huge contribution. In the last two pieces of this series we’ve gone into some detail on the historical background to current camera technology. In this last piece on the art and science of sensors and lenses, we’re going to consider what difference all this makes in the real world.

The size of the imaging sensor we choose affects the most fundamental things about photography. A lens, in the end, simply projects an image onto a surface, and the size of that surface determines how much of that projected image we see. So, despite common claims to the contrary, what a bigger sensor actually does, all else being strictly equal, is show us a wider field of view. In many discussions of the subject, though, it’s taken as read that the larger sensor gives us shallower depth of field. That’s true if, and only if, we change lenses to achieve the same field of view. Achieving what we might consider the same frame on two cameras with different sensor sizes requires a longer lens on the larger sensor, and a longer lens, at the same aperture and subject distance, has reduced depth of field.
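The geometry is simple enough to check. Here’s a minimal sketch in Python – the sensor widths are approximate, and the 50mm lens is just an example – showing the same lens producing two different fields of view on two sensor sizes:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view from the simple pinhole/thin-lens model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The same 50mm lens projected onto two sensors (widths approximate):
for name, width in [("Super 35 (~24.9mm wide)", 24.9),
                    ("Full frame (~36mm wide)", 36.0)]:
    print(f"{name}: {horizontal_fov_deg(width, 50):.1f} degrees")
```

The larger sensor simply sees more of the projected image: roughly 40 degrees across instead of roughly 28 with the same glass.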

One interesting way to think about this is to consider the real physical size of the sensor compared to the display we’re watching. If we’re viewing the results on the same 55-inch TV, the image from the smaller sensor must be blown up more, in terms of sheer physical dimensions, in order to fill it. That enlargement magnifies detail and makes out-of-focus areas look more out of focus than they otherwise would. There are other ways to frame the same reasoning, all equally valid, but in the end a larger sensor will, in practice, produce shallower depth of field for the same field of view.
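To put rough numbers on that enlargement, here’s a sketch assuming a 55-inch 16:9 display (about 1218mm wide) and approximate 16:9 sensor widths; the 0.01mm blur spot is purely hypothetical:

```python
# Approximate width of a 55-inch 16:9 display, in millimetres.
DISPLAY_WIDTH_MM = 1218

# Sensor widths are approximate 16:9 figures.
for name, sensor_width_mm in [("2/3-inch (~9.6mm)", 9.6),
                              ("Full frame (~36mm)", 36.0)]:
    magnification = DISPLAY_WIDTH_MM / sensor_width_mm
    # A hypothetical 0.01mm out-of-focus blur spot on the sensor,
    # enlarged by the same factor as everything else in the image:
    blur_on_screen_mm = 0.01 * magnification
    print(f"{name}: enlarged {magnification:.0f}x; a 0.01mm blur "
          f"spot becomes {blur_on_screen_mm:.1f}mm on screen")
```

The same physical blur on the sensor ends up nearly four times larger on screen when it comes from the smaller chip.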

Now we’ve changed the magnification of the lens to achieve the same frame, we’ve also changed a lot of other things. The magnification of a lens is, of course, proportional to its focal length. The f-number of a lens – its speed – is equal to that focal length divided by the diameter of the entrance pupil. Here, “entrance pupil” means the size of the aperture as viewed through the front elements, so magnifying lenses in front of the aperture can make a lens faster by creating a bigger target for the light to hit. That works up to (in theory) the real physical diameter of the front of the lens, so a 50mm f/2 lens must be at least 25mm in diameter, and in reality considerably more than that.
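The arithmetic is easy to express directly. A minimal sketch – the 280mm example anticipates the lens discussed below, and treats its T-stop as if it were an f-number, which is an approximation:

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    """f-number N = focal length / entrance pupil diameter, so D = f / N."""
    return focal_length_mm / f_number

print(entrance_pupil_mm(50, 2.0))   # 25.0  -- a 50mm f/2 needs a 25mm pupil
print(entrance_pupil_mm(280, 2.8))  # 100.0 -- why long, fast lenses get big
```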

At the same time, we need a big enough image to cover the sensor, which also tends to make lenses larger; it’s no surprise that glass designed for larger formats tends to be physically larger and often very, very much more expensive.

Figure 1 – the top diagram shows the relative sensor positions for a lens focusing on a scene with a viewing angle of 35 degrees. From our thin lens approximation formula (1/f = 1/u + 1/v), we can see that if “u”, the distance from the scene to the lens, stays the same, and “v” increases from v₁ to v₂ because a larger sensor is being used, then by definition the focal length (f) must also increase. This is why a lens with a longer focal length is required when a larger sensor is used and the viewing angle is kept constant (see figure 2 for more details).
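To see the caption’s reasoning in numbers, here’s a small sketch of the thin lens formula with hypothetical values – a subject 3m away, and an image distance that grows from 25mm to 40mm as the sensor gets bigger:

```python
def focal_length_mm(u_mm, v_mm):
    """Thin-lens approximation 1/f = 1/u + 1/v, solved for f."""
    return 1 / (1 / u_mm + 1 / v_mm)

u = 3000.0  # subject 3m from the lens (hypothetical distance)
for v in (25.0, 40.0):  # image distance v grows with the larger sensor
    print(f"v = {v:.0f}mm -> f = {focal_length_mm(u, v):.1f}mm")
```

With “u” held constant, a larger “v” forces a longer focal length: about 24.8mm in the first case and 39.5mm in the second.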

It’s possible to overdo it, though, and it’s at this point that unequivocal engineering realities begin to give way to subjective interpretation, because there’s no single amount of depth of field that is somehow correct. Invariably, when the subject of a shot is a human, the eyes are the target for focus, because humans habitually look one another in the eye. When the focus puller needs to ask which of those eyes the director would like in focus, depth of field is probably too shallow. At the risk of offering an opinion, a shot which doesn’t clearly delineate the edge of the subject against the background is likely to lack depth and separation; we are, after all, talking about what’s invariably a two-dimensional artform.

There will always be exceptions, but the public demand is often for very fast lenses, and even lenses which cover very large sensors can be very fast. Arri’s Signature Primes open up to T1.8 up to 125mm, with T2.8 available on the 280mm. A 280mm lens creates, of course, a rather wider field of view on a large sensor than it would on a super-35mm sensor, and the larger sensor’s image will be magnified less for display, as we discussed above, but it’s still a downright intimidating challenge for the focus puller. The option is nice, but it’s crucially important for everyone involved to understand that super-low f-numbers are, like super-large sensors, a possibility, not a target, and almost certainly not both at once.

How bad is it? Fine, we’ve matched field of view, and in doing so we’ve reduced depth of field on our larger sensor. If we want to maintain the same depth of field, assuming we’ve gone from a sensor roughly the size of super-35mm to a sensor roughly the size of a full-frame still photo negative, the difference in depth of field is equivalent to closing the lens down one or perhaps two stops, depending on the specifics. Instinctively, that doesn’t feel too bad; it’s only one or two more notches on the lens, but that’s up to four times the light, and that’s significant.
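Under the simple crop-factor model, the arithmetic looks like this (sensor widths are approximate, and real lens and sensor combinations will vary):

```python
import math

def stops_to_match_dof(small_width_mm, large_width_mm):
    """Stops to close down on the larger sensor to restore the depth of
    field of the smaller one, under the simple crop-factor model."""
    crop = large_width_mm / small_width_mm
    return 2 * math.log2(crop)

stops = stops_to_match_dof(24.9, 36.0)  # Super 35 -> full frame, approx widths
print(f"{stops:.1f} stops, i.e. {2 ** stops:.1f}x the light")  # ~1.1 stops
```

That model lands at roughly one stop; depending on aspect ratios and exactly how the frame is matched, real-world comparisons can push it toward two.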

In reality, most people shooting with larger sensors are not significantly increasing the amount of light they’re using. There’s perhaps an argument that the sensitivity and noise performance of digital cinematography has made us lazy; in the middle of the twentieth century the effective sensitivity of processes such as three-strip Technicolor was sometimes in the single digits, and light sources were vastly less efficient, far bulkier, and more demanding on crew and support equipment. It’s been said that, despite the modern interest in shallow depth of field, a standard shooting stop for a typical late-twentieth-century feature film working on 35mm film was f/4, which produces a depth of field similar to a 2/3” video camera with the lens wide open.

Figure 2 – For a constant viewing angle of 59 degrees, the focal length can be seen to increase as the sensor size increases from a 2/3” sensor to the Arri Alexa 65. For example, to achieve a 59-degree viewing angle on a 2/3” sensor, a focal length of 10mm is required; to achieve the same viewing angle on a Blackmagic URSA sensor, a lens with a focal length of 26mm is required (all measurements are rounded to zero decimal places).
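If we assume the figure’s angles are measured across the sensor diagonal – which is consistent with the numbers quoted – the focal lengths can be reproduced as follows (the diagonals are approximate, and the URSA figure is taken to be the 4.6K sensor):

```python
import math

def focal_for_fov_mm(sensor_dimension_mm, fov_deg):
    """Focal length giving the stated angle of view across a sensor
    dimension (here the diagonal), from the thin-lens/pinhole model."""
    return (sensor_dimension_mm / 2) / math.tan(math.radians(fov_deg) / 2)

# Approximate sensor diagonals, for a 59-degree diagonal angle of view:
for name, diag_mm in [("2/3-inch (~11mm)", 11.0),
                      ("URSA 4.6K (~29mm)", 29.1),
                      ("Alexa 65 (~60mm)", 59.9)]:
    print(f"{name}: {focal_for_fov_mm(diag_mm, 59):.0f}mm")
```

This reproduces the caption’s 10mm and 26mm figures, and puts the Alexa 65 at around 53mm for the same angle.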

And, as we know, it is far from coincidental that “wide open” can mean f/1.3 on a 2/3” video lens, because it’s so much easier to build very fast lenses for smaller sensors. Zeiss’s DigiPrime range was built for digital cinematography on 2/3” cameras and uniformly achieves T1.6 out to 70mm, a reasonably long lens on such a small sensor – and they’re not that large.

Either way, the solution to the concerns of focus pullers faced with a large format shot at 125mm and f/1.8 is not, usually, more light, because light levels are so often controlled more by budget than by photographic need. Sometimes, the solution is to select a different – higher – sensitivity in the camera. Two stops are equal to about 12dB of gain in the language of broadcast camera engineering, which is more than most people would want to apply to, say, a newsgathering camera. In 2019, though, a lot of digital cinematography cameras are capable of effective sensitivities in the thousands of ISO to begin with.
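The stop-to-decibel conversion is straightforward: each stop doubles the signal, and a doubling is about 6dB in broadcast terms. A quick sketch:

```python
import math

def stops_to_db(stops):
    """Broadcast gain convention: doubling the signal is about 6dB."""
    return 20 * math.log10(2 ** stops)

print(f"{stops_to_db(2):.1f} dB")  # two stops ~= 12dB of gain
```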

Does this solve the problem? Well, it depends whether all that sensitivity is already being leveraged to reduce the lighting budget (or to make larger locations practical, or to shoot more with practical lighting, or for any other reason). Is any production likely to back down from those things in order to give the focus puller an easier time? In the end, it might be the case that larger sensors are at least something of a zero-sum game. As we saw in part 2, larger sensors give us the option of more sensitivity, dynamic range or noise performance. If we push sensitivity beyond where we normally would, there’s a risk we’ve traded away the very advantages we were seeking from a larger sensor in the first place.

In the end it’s great that there are enough camera choices for most productions to have what they want, even if the fight between f-number, sensitivity and depth of field can become something of a stalemate. If it’s any consolation to the first assistants of the world, the move toward larger sensors has happened more or less at the same time as a massive increase in the quality of on-set monitoring, although even the most experienced will admit that the best monitor will only tell us once something has already gone soft.

As to how the average focus puller is to deal with the new reality, when productions want all the advantages of larger sensors without the increase in light level, the solution is often simple: concentrate harder, people. Think of it as an opportunity to shine.
