The push to create the ideal digital cinematography camera has now been going on for, arguably, two decades. There were a couple of standout attempts in the 1980s involving high definition tube cameras, but the introduction of Sony’s HDCAM tape format in 1998 served more or less as the starting point of recognizably modern digital cinema. Since then, a huge effort has been made to meet the standards of a century of conventional, photochemical moviemaking. Arguments about whether that’s been achieved, or ever will be achieved, seem likely to rage forever, but in 2019 there seems at least some interest in going way, way beyond (some parts of) what 35mm film could ever do. The question is why.
If that claim seems controversial on its own, consider that the conventional cinema release process was of fairly limited resolution by modern standards. Equating film and digital imaging in terms of resolution is notoriously difficult, but by the time the original camera negative of a 35mm motion picture has been cut, duplicated to an interpositive, duplicated again to an internegative and then duplicated yet again to a print, much has been lost – and that assumes a fine-grained camera negative to begin with. Release prints struggle to achieve the equivalent of 1.5K even in ideal circumstances. There’s more to this than resolution, but modern cameras had already exceeded 35mm in at least sharpness and freedom from noise, even before full-frame sensors emerged.
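That generational erosion can be illustrated with a toy calculation. If each photochemical stage is treated as multiplying the system’s modulation transfer function (MTF) at some spatial frequency, the losses compound quickly. The per-stage figures below are illustrative assumptions, not measured data:

```python
# Toy MTF-cascade model of the photochemical release chain.
# The per-stage factors are illustrative assumptions, not measured values:
# each generation passes only a fraction of the detail at a given spatial
# frequency, and the factors multiply through the chain.
stages = {
    "camera negative": 0.80,
    "interpositive": 0.85,
    "internegative": 0.85,
    "release print": 0.80,
}

system_mtf = 1.0
for name, factor in stages.items():
    system_mtf *= factor
    print(f"after {name}: {system_mtf:.2f}")

# Under these assumed numbers, less than half the original detail survives
# to the release print - which is why prints resolve so much less than the
# camera negative they descend from.
```

The exact numbers matter less than the shape of the result: any chain of stages each passing 80–85% of detail ends up below 50% after four generations.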
Perhaps this makes sense; it’s not as if the 35mm film format was specifically designed to have certain depth of field or resolution characteristics. There’s no really good technical reason for it to be a benchmark; it is a de facto standard at best. The man most responsible for the imaging characteristics of commercial filmmaking is Thomas Edison, though interestingly, he probably didn’t quite intend his approach to be universal. Edison’s Kinetoscope system was probably the first to use film 35mm wide with four sprocket holes per frame, and Edison enjoyed patent protection on that idea until losing a court battle on the matter in 1902.
The rest, of course, is history: Edison’s design became the standard for over a century, and even now we build digital cinema cameras to approximate two different varieties of 35mm film. The one that’s exciting people in 2019 is the horizontal approach, similar to a stills frame and sometimes equated to still photography or the 1950s VistaVision format. It’s worth a digression on the terminology here, because most modern productions aim for aspect ratios of 1.78:1, 1.85:1 or around 2.4:1, whereas VistaVision shot a 1.66:1 frame inside a 36 by 25.17mm negative and still photos are often even squarer at 1.5:1. Calling those 8-perf cameras “large format” is also something of a misnomer since large format is a term with a predefined meaning: a four-by-five inch negative, which is bigger even than a 56mm-high medium-format frame.
There are even bigger options. Alexa 65 shoots a 54.12 by 25.59mm, 6.5K-wide frame. What’s clear in 2019 is that it’s becoming easier to accommodate these demands. The technology to make ever larger sensors is, if not trivial, at least more accessible than ever. Arri called the Alexa 65’s sensor A3X in direct recognition of the fact that it is, in effect, three of the Alexa’s ALEV sensors, rotated vertically and stacked side by side. That has probably been done because the company knows its sensor is widely admired, which is perfectly fair, but it’s an indication that sensor design is now much more about what we choose to create than what the technology constrains us to create.
The question then is what we want. Arri has been public about the fact that not every production can or should aim for sizes beyond super-35. Once, super-35 was itself the “big chip” option compared to the 2/3” video cameras which were being pressed into service in early digital cinematography.
The history of electronic imaging sensors is, if anything, more complicated than that of film. No part of a 2/3” imaging sensor is actually two thirds of an inch across; the sizing came from the 2/3”-diameter Vidicon vacuum tube technology used in cameras until the 1980s. A modern 2/3” CMOS or CCD sensor is generally around 8.8 by 6.6mm. The resulting 11mm diagonal is around 0.43”, far less than the 0.66” hinted at by the description. That 0.43” corresponds to the diagonal of the part of a Vidicon tube’s faceplate that was actually used to detect a picture, and the sizing was maintained through later cameras so that lenses would remain compatible.
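The arithmetic is easy to check. Taking the common 8.8 by 6.6mm active area as given, the diagonal follows directly from Pythagoras:

```python
import math

# Active area of a typical 2/3" sensor, in millimetres.
width_mm, height_mm = 8.8, 6.6

# The diagonal, by Pythagoras, and its conversion to inches (25.4mm per inch).
diagonal_mm = math.hypot(width_mm, height_mm)
diagonal_in = diagonal_mm / 25.4

print(f"{diagonal_mm:.1f}mm = {diagonal_in:.2f} inches")  # 11.0mm = 0.43 inches
```

The 8.8:6.6 ratio is a scaled 4:3:5 triangle, which is why the diagonal comes out to exactly 11mm – around 0.43”, well short of the nominal two thirds of an inch.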
What that illustrates is the problem faced by modern cinematographers looking for lenses, particularly zooms, that are fast, lightweight, high ratio, and cover full-frame cameras. Modern broadcast cameras emulate the older 2/3” technology specifically so that it’s possible to create fast zoom lenses with enormous range that are reasonably compact and reasonably affordable. It could have gone another way: digital cinema wasn’t the first to use really large sensors. Image Orthicon tubes of the 1950s and 1960s could be up to 4.5” in diameter. As with 2/3” tubes, the image would have been smaller than the size of the tube itself, but it would certainly have been larger than common film formats.
The difference is that those huge tubes weren’t made that size specifically because there was a desire for big sensors. It seems unlikely that the microscopic depth of field (or huge light levels) and huge, heavy, ruinously expensive lenses implied by 4.5” tubes was viewed positively. They made portable cameras, particularly handheld cameras, very difficult in terms of both the bulk of the hardware and the lenses required to throw an image onto such a postcard-sized sensitive zone. It’s hardly surprising that smaller imagers seemed like a good idea, hence the 2/3” Vidicon and the solid-state silicon sensors that followed it.
The first patents covering CCDs were issued in 1971. Perhaps shockingly, the first digital stills camera, boasting a 100 by 100 pixel resolution and storing 30 images on cassette tape, was developed by Steven Sasson at Kodak less than five years later. Arguably the first digital still camera image was photographed by Sasson in December 1975, and within a year, consumers could buy a 32-by-32-pixel camera in kit form from Cromemco.
First Digital Sensors
But CCD sensors big enough to capture the whole image cast by a cinema lens were another two or three decades away. The Cromemco camera used, quite literally, a repurposed memory chip with 1,024 individual bits of memory which were laid out on the chip in a 32 by 32 grid. Store digital ones in the memory, shine light at the silicon itself, and slowly those ones will be erased to zeroes. By repeatedly reading out the contents of the memory over a short period, the brightness of the light falling on any one “pixel” could be estimated by how long it took to return to a zero state. It was primitive, but it showed that digital imaging was possible.
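A rough simulation conveys the principle. Everything below is a modern sketch, not Cromemco’s actual readout logic: each memory bit starts as a one, brighter light makes it decay to zero sooner, and the time of the flip is inverted to estimate brightness.

```python
import random

# Illustrative simulation of imaging with a 32x32 memory chip: light erases
# stored ones to zeroes, and time-to-zero encodes brightness. The scene and
# decay probabilities here are invented for demonstration.
SIZE = 32
random.seed(42)

# A hypothetical scene: brightness values from 0 (dark) to 1 (bright).
scene = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

# Preload every bit with a digital one; record when each flips to zero.
memory = [[1] * SIZE for _ in range(SIZE)]
flip_time = [[None] * SIZE for _ in range(SIZE)]

# Repeatedly "read out" the memory: on each pass, a bit flips to zero with
# probability proportional to the light falling on it.
for t in range(1, 201):
    for y in range(SIZE):
        for x in range(SIZE):
            if memory[y][x] == 1 and random.random() < scene[y][x] * 0.1:
                memory[y][x] = 0
                flip_time[y][x] = t

# Shorter time-to-zero means more light: invert to recover a brightness map.
estimate = [[0.0 if t is None else 1.0 / t for t in row] for row in flip_time]
```

On average, bits under bright parts of the scene flip within the first dozen or so passes, while bits under dark parts survive far longer, so the inverted flip times form a crude but recognisable image.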
The Cromemco’s sensor was also tiny, a mere few millimetres square, and it took CCD sensors thirty more years to grow big enough that cinema lenses could be used with the same field of view as 35mm film. By that time, CMOS imagers were becoming a plausible alternative, and the Viper, Sony’s F23 and F35 (and the related Panavision Genesis) were effectively both the first and last generation of digital cinema cameras to use CCDs. Even then, special lenses were common: Viper and F23, being three-chip designs based heavily on broadcast camera practice, demanded lenses designed to land their images precisely on the three separate sensors. Zeiss’s Digiprime range was designed for exactly that, built for single-camera drama crews with a focus puller hard at work, but the approach became a technological cul-de-sac that barely lasted a decade.
Cinema Glass Compatibility
So, by the late 2000s – and certainly after the introduction of Alexa in 2010 – it was not difficult (though still not cheap) to find a camera compatible with a much-loved set of cinema glass, and one that would create images which were at the very least acceptable to cinema audiences. It might have seemed that the fight was won, but no: it seems the industry wants sensors the size of VistaVision frames, or thereabouts.
The push for more capability in the camera department has now gone beyond simply duplicating what conventional filmmaking had been doing for the latter decades of the twentieth century. It’s not really about depth of field, and it’s certainly not about a desire to use existing lenses. Quite the opposite, in fact: lens manufacturers have been spurred into a flurry of activity to satisfy the appetite for lenses of all varieties – prime, zoom, spherical, anamorphic – to facilitate full-frame filmmaking.
The question, then, is what bigger chips really get us; that is the subject of my next article.