HDR: Part 21 - Creative Technology - The Size Of Light

Most people are aware that the amount of detail in a photograph depends on a few things. Assuming focus is right, it’s tempting to put it down to a combination of how well the lens was made and the number of pixels on the sensor, give or take the quality of all those things. Now we’re starting to hit issues with what we might almost think of as the size of light.

That's a shorthand, of course. The idea of light being made of photons is an approximation that works in some situations. The idea of light being a wave, like a radio wave or a wave in the sea, is an approximation that works in others. Either way, the effect of individual waves of light or individual photons is so tiny that technology has generally been able to ignore them. Now, though, things are getting so good that the physics starts to affect us. We’ll touch on some of the science, but there are wider issues here about what cinema has always been, what it can be in the future, and ultimately, what we want.

Something we do want is better color reproduction. One way to describe pure, saturated colors is in nanometers, a unit of distance. Shorter is bluer, longer is redder. That's a wavelength, like a radio station's, but since light is such a fast and tiny thing, the wavelength is very short. We often use the wavelength of green light, 550nm, as a representative figure, because eyes (and cameras) have peak sensitivity in the green. 550nm is 0.00055mm. By comparison, Blackmagic's Ursa Mini Pro 12K has a sensor with individual photosites 0.0022mm across. Red light has a longer wavelength, up to around 0.00072mm, but either way the photosites are only three or four wavelengths of light across.
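
If you want to check that arithmetic, here's a minimal Python sketch; the 0.0022mm pitch is the approximate figure quoted above, not a manufacturer specification.

    # Rough comparison of photosite pitch to the wavelength of visible light.
    pitch_mm = 0.0022
    for name, wavelength_mm in (("green", 0.00055), ("red", 0.00072)):
        print(f"{name}: about {pitch_mm / wavelength_mm:.1f} wavelengths per photosite")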

One side-effect of this is diffraction limitation, something many photographers are already well aware of. We stay away from very low f-stops because they tend to reveal the practical problems of making larger, more curved pieces of glass without introducing errors. At the same time, though, we stay away from high f-stops because diffraction causes sharpness issues of its own.

Airy Disk Rings

The problem is that even if we could make a perfect lens, it wouldn’t land even a tiny beam of light on the sensor as a single point. It would create a pattern, with a bright peak in the center and a series of dimmer rings surrounding it. This is an Airy disk, named after the astronomer George Biddell Airy, though it was noticed earlier by other stargazers who spent long nights staring through lenses at very small points of white light in the sky. We define the size of the Airy disk as the diameter of its first dark ring, and beyond a certain size it can make an image look soft.

Diffraction is caused, broadly speaking, by light waves interfering with each other. It’s more noticeable as the waves pass through a narrow gap, such as a small aperture in a camera. We can think of this, slightly approximately, as an issue of averaging. If the aperture is large, a lot of light waves will fit through, and the effects of diffraction are reduced by averaging all those waves together. With a narrower aperture, far fewer light waves fit, and there is less averaging. Most photographers know that apertures of f/11 and beyond start to look visibly soft.

And that’s on cameras with fairly conventional pixel density. Fujifilm’s GFX100 has a hundred million pixels, but it’s a medium-format camera; its photosites are similar in size to those in the same company’s X-T4, which has a lower resolution but a smaller sensor. Blackmagic’s 12K beast is less conventional: it’s roughly a super-35mm camera, but it packs in 80 megapixels. With such tiny, densely-packed photosites, diffraction steps in sooner.

Diffraction Limits

We can estimate how much by comparing the size of the light and the size of the iris, dividing the wavelength by the diameter of the aperture. We then multiply by 1.22, a constant describing where the first dark ring of the Airy disk falls, to get the angle into which a beam of light will diverge. The diameter of the aperture in (say) a 50mm lens at f/8 will be 50 ÷ 8 = 6.25mm, and 550 nanometers is 0.00055mm. 0.00055 ÷ 6.25 × 1.22 gives us… well, a very small angle. Multiply that angle by the 50mm distance to the sensor, then double it to turn a radius into a diameter, and by the time that point of light has reached the sensor it’ll have diverged into an Airy disk roughly 0.0107mm across.
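
Here's that arithmetic as a short Python sketch. Note that the focal length effectively cancels out: the result depends only on the wavelength and the f-number.

    # Airy disk diameter on the sensor, measured to the first dark ring.
    wavelength_mm = 0.00055                               # green light
    focal_length_mm = 50.0
    f_number = 8.0
    aperture_mm = focal_length_mm / f_number              # 6.25mm
    angle_rad = 1.22 * wavelength_mm / aperture_mm        # divergence half-angle
    airy_diameter_mm = 2 * angle_rad * focal_length_mm    # equals 2.44 * wavelength * f-number
    print(f"Airy disk diameter: {airy_diameter_mm:.4f}mm")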

But the photosites on a 12K super-35mm sensor might be 0.0022mm across, about one-fifth that size. That theoretical camera is diffraction-limited from around f/4 upward, counting in whole stops. Considering that many lenses don’t perform particularly well wide open either, and that anything below f/4 is what most people would call a wide aperture, the average cinematographer might feel rather painted into a corner by this.
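
A sketch of that comparison, using the common (and somewhat arbitrary) rule of thumb that diffraction starts to bite once the Airy disk spans about two photosites; the 0.0022mm pitch is again the approximate figure from above.

    # Find where the Airy disk exceeds two photosites, stepping through whole stops.
    wavelength_mm = 0.00055
    pitch_mm = 0.0022
    for f_number in (1.4, 2, 2.8, 4, 5.6, 8, 11, 16):
        airy_mm = 2.44 * wavelength_mm * f_number
        limited = airy_mm > 2 * pitch_mm
        print(f"f/{f_number}: Airy disk {airy_mm:.4f}mm "
              f"{'diffraction-limited' if limited else 'fine'}")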

The reality is that the Ursa Mini Pro 12K is not a theoretical camera, and there are a lot of other considerations. Nyquist’s theorem, for instance, says that we can’t record frequencies higher than half the rate at which we sample them. That’s why 48kHz audio streams can’t reproduce sounds above 24kHz (though that’s more than enough), and it’s why an 8K camera can’t see details less than one four-thousandth of the frame width across without aliasing.
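
As an illustrative Python sketch of that audio case: a 1kHz tone sampled at 48kHz comes back faithfully, while a 25kHz tone (above the 24kHz Nyquist limit) folds back to 23kHz instead.

    import numpy as np

    sample_rate = 48000.0
    t = np.arange(0, 0.01, 1 / sample_rate)    # 10ms of samples
    for freq in (1000.0, 25000.0):             # one below Nyquist, one above
        samples = np.sin(2 * np.pi * freq * t)
        # Find the strongest frequency actually present in the sampled signal.
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), 1 / sample_rate)
        print(f"{freq:.0f}Hz tone appears at about {freqs[np.argmax(spectrum)]:.0f}Hz")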

Improving Down-Sampling

Most cameras do suffer at least some aliasing but, even so, there’s no point in landing an image containing 12,000 photosites’ worth of detail on an unfiltered 12K sensor. High-resolution camera developers would probably also tell us that the purpose of these very high resolutions is not to increase sharpness, though it does that. It’s to allow down-sampling for better color, noise, and aliasing performance. Still, if an IMAX production actually wants 12K deliverables and the convenience of super-35mm sensors, it’ll need to carry a wide range of ND filters.
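
A minimal sketch of why down-sampling helps with noise, using a synthetic flat grey frame rather than real sensor data: averaging each 2×2 block of photosites roughly halves the random noise, since every output pixel is the mean of four samples.

    import numpy as np

    rng = np.random.default_rng(1)
    # A flat grey image with additive noise, standing in for sensor output.
    noisy = 0.5 + rng.normal(0.0, 0.02, size=(1024, 1024))
    # Down-sample by averaging each 2x2 block of photosites.
    downsampled = noisy.reshape(512, 2, 512, 2).mean(axis=(1, 3))
    print(f"noise before: {noisy.std():.4f}, after 2x2 average: {downsampled.std():.4f}")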

Diffraction is an example of light waves causing issues. Sometimes, just to be awkward, it’s useful to think of light as a particle, too. One of the things that means is that light is not what we might call a continuous phenomenon; it comes in discrete pieces. It’s quantized, which is where quantum theory gets its name. And that means you can’t have half a photon; there is some minimum amount of light, and any less than that is none at all. There are not an infinite number of brightness levels.

In reality, even the human eye, which is still more sensitive than most cameras, struggles to detect a single photon, so this isn’t something we’re normally aware of. Certainly, the deepest black shadows in most scenes will still send more than one photon toward the camera every video frame, and cameras will still see them as black. On the other hand, sunlit scenes are saturated with uncountable trillions of photons, so one more or less makes no visible difference. The ability of a sensor to detect really small changes in brightness is controlled mainly by things like thermal noise, which would bury a single-photon difference. Increasingly, though, some sensors have noise equivalent to just a handful of photons, so we can see really small changes in light level.

Random Photons

That’s great for picture quality, but it creates a complicated problem. A camera determines how bright an object is by counting the photons bouncing off it, and that number depends on some random influences. A light source emits a certain average number of photons per second, but, within the beam shape of that light, photons come flying out in randomly-determined directions. There are so many of them that we simply see the average distribution – a beam – and we aren’t really aware of the idea of individual photons. In the same way, when we illuminate a matte, non-specular object with a beam of light, each photon has something approaching an equal chance of bouncing off in any random direction. Again, with lots of light, we see the average; we see a light illuminating a surface.

Things change when light levels are very low and the exposure time of a single image – a video frame – is reasonably short. It’s quite possible that, during our video frame’s exposure, more than the average number of photons happen to make it from light to lens. The object looks brighter, at least in one photosite. Next frame, the opposite might happen, and that pixel might look darker. It’s a random variation, based on nothing more than flipping a coin and happening to get a long run of heads in a row. What we want to record is the average illumination of the object; what we get is a snapshot of what the illumination actually was during one specific window of time, and that snapshot varies randomly from frame to frame.
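
A minimal sketch of that flicker, treating the photons hitting one photosite per frame as a Poisson process; the average of 20 photons per frame is purely an illustrative figure.

    import numpy as np

    rng = np.random.default_rng(0)
    mean_photons = 20    # average photons per photosite per frame (illustrative)
    frames = rng.poisson(mean_photons, size=10)
    print("photons counted in ten consecutive frames:", frames)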

Shot Noise Remedy

We call this shot noise, and it’s a sampling error, just like surveying too few people and getting an unrepresentative result. There is quite simply no solution to it beyond brute force: increasing the amount of light, increasing the exposure time of the frame, opening up the lens or increasing the size of the photosites on the sensor; anything to capture more photons and get a better average. We can apply noise reduction in post, which will work on shot noise as well as it works on any other sort of noise, but it’s never a perfect solution.
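
Here's a sketch of why brute force works: for Poisson-distributed photon counts the noise only grows as the square root of the signal, so every quadrupling of the light roughly doubles the signal-to-noise ratio. The photon counts are, again, just illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    for mean_photons in (25, 100, 400, 1600):
        counts = rng.poisson(mean_photons, size=100000)
        snr = counts.mean() / counts.std()    # signal-to-noise ratio, roughly sqrt(mean)
        print(f"mean {mean_photons:5d} photons per frame: SNR about {snr:.1f}")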

So what does all this tell us? Modern digital cameras are approaching the point at which they can record reality with almost as much precision as reality has to record. Perhaps fundamental changes, moving toward things like the lightfield array cameras proposed by the Fraunhofer research institute, might allow us to get around some of these limitations, using computer horsepower to generate virtual cameras with impossible characteristics.

Remembering Our Goals

But in the end, we might ask whether we want or need to do that. Despite many attempts to introduce new technologies, the fundamental demands of cinema have been fairly static since the middle of the twentieth century. We want a somewhat widescreen image, sharp enough to look at least as good as 35mm 4-perf negative on a 40- or 50-foot screen (or a much smaller, much closer home display). Even basic modern cameras are capable of that, and work fantastically in light levels that would barely have produced a vague cluster of moving shadows on film stocks of the 1950s. Cinema as a medium has been resistant to change. Perhaps, in the same way that spoken-word radio did not replace books and television did not replace radio, cinema in something like its current form is here to stay.

It’s always possible to hypothesize about future exhibition formats, or even refer to the outliers of modern practice. IMAX can use as much resolution as it can get its hands on, and films for theme park rides often use very specialist approaches to create very specific effects. Even overlooking those special exceptions, there are things we could improve. More dynamic range and better color reproduction are constant targets, but in terms of resolution and sensitivity, some fundamental limits are approaching fast. And in the end, isn’t this more or less where we all wanted to be?
