Big Chip Cameras For Broadcast: The Drive For Larger Sensors

Achieving a more cinematic look is about more than technology, but there are good technical reasons why large sensors are the simple but effective way to achieve improved noise, sensitivity and dynamic range.

Whatever anyone thinks about the relationship between broadcast television and single-camera drama, the history of production technology overwhelmingly associates material shot on larger sensors with the gloss of cinema. With that in mind, it is difficult to rationalize the push toward larger sensors in broadcast as anything other than a reach by broadcasters for the luster of the big screen.

Cinematic, though, is a thoroughly subjective term, and making good decisions demands a more specific understanding of the nature of that luster. Cinema technique, after all, encompasses production design, lighting and other creative considerations which are far beyond the direct influence of camera technology. Even so, there are identifiable technical characteristics of cameras built for cinema which broadcast cameras lack - and, crucially, vice versa.

Large Sensor Benefits

The differences in the pictures made by otherwise-equivalent large and small sensors are a vast subject. One of the complications is that very often, large and small sensors will not be otherwise equivalent, with significant changes in the underlying technology which affect how images look. Even so, there are some principles which underpin the thinking behind larger sensors. One good example is the reduced depth of field, a subject to be revisited later, though there are others which are less well-known but just as influential.

One is oversampling. The resolution gap between cinema and television has mostly been erased by HD and then UHD broadcasts, but scaling down a very high resolution image can create pictures that exhibit an attractive combination of sharpness and smoothness - something that is difficult to achieve where the sensor has the same resolution as the distributed broadcast. Formally, oversampling can result in an image with detail limited only by the mathematical constraints of sampling theory. Larger sensors can have more pixels without suffering other compromises.
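The smoothness side of that oversampling advantage comes partly from noise averaging, which is easy to sketch numerically. This is a deliberately minimal simulation - a flat grey scene, Gaussian sensor noise and plain 2x2 block averaging are all illustrative assumptions, not any camera's actual downscaler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a flat grey scene captured at twice the delivery resolution in
# each axis, with additive sensor noise (hypothetical values for illustration).
high_res = 0.5 + rng.normal(0.0, 0.05, size=(1080, 1920))

# Downsample by averaging 2x2 blocks of photosites - crude oversampling.
h, w = high_res.shape
low_res = high_res.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Averaging n uncorrelated samples reduces noise by a factor of sqrt(n),
# so the 2x2 average roughly halves the noise standard deviation.
print(high_res.std(), low_res.std())
```

Real camera scalers use far more sophisticated filters, but the underlying statistics are the same: more photosites per output pixel means quieter output pixels.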

Sheer photographic performance is another benefit. Noise, dynamic range and sensitivity are related quantities, traded off by camera designers to achieve an acceptable engineering compromise. Given lenses capable of the same f-number at the same field of view, larger-sensor cameras will tend to have improved noise, dynamic range and sensitivity, leading to images which are quieter, better in low light, and with less harshly-clipped highlights. Highlight rendering has been a creative priority for cinema production for as long as there have been digital cinema cameras, and broadcast television now stands to reap the benefit of those same developments.
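The physical root of that performance advantage is photon shot noise. A back-of-envelope sketch, assuming an idealized photosite whose signal-to-noise ratio is set purely by photon statistics (the photon counts here are hypothetical round numbers, not measured figures for any sensor):

```python
import math

# Toy shot-noise model: the number of photons a photosite collects at a given
# exposure scales with its area, and shot noise is the square root of the
# photon count, so SNR = photons / sqrt(photons) = sqrt(photons).
def shot_noise_snr_db(photons: float) -> float:
    return 20 * math.log10(math.sqrt(photons))

# Hypothetical photosite collecting 10,000 photons on a small sensor;
# quadrupling the photosite area quadruples the photon count.
small = shot_noise_snr_db(10_000)
large = shot_noise_snr_db(40_000)

print(round(large - small, 2))  # ~6 dB SNR advantage for the larger photosite
```

Roughly speaking, every doubling of light-gathering area buys about 3 dB of shot-noise headroom, which the designer can spend on quieter shadows, higher rated sensitivity, or more gently-handled highlights.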

Understanding those things sheds some light on exactly what broadcasters are looking for when they choose larger sensors, and can inform the decisions made when specifying new equipment - though not every benefit derives directly from the fact that the sensor is larger.

Single-Chip

One of the less-discussed factors of large sensor cameras (in both broadcast and cinema) is that almost all of them use just one sensor, as opposed to the three found in most color broadcast cameras. For most of the color television era, and right up to the present day, broadcast images have been assembled from three physically separate sensors, using color-selective dichroic optics to divide the incoming image into its red, green and blue components.

That no-compromise approach ensures that the image has separately-recorded red, green and blue components, but it poses challenges for cinema which ultimately led to the genesis of large-sensor cameras. Lenses are generally easier to design if they can be placed close to the sensor, but the prism block required to split light between three sensors forces those sensors to sit a long distance behind the lens. That means lenses built for cinema, without external accessories, cannot work on three-sensor broadcast cameras - and it is that consideration which provoked the development of large single-sensor cameras in the first place.

Early in the digital cinema transition, some cinema-styled lenses were made to suit three-sensor cameras, notably the well-received Zeiss DigiPrimes. Most cinematographers, though, preferred their favorite lens sets from the days of 35mm film. Making that practical would require a single large sensor with enough resolution for a pattern of red, green and blue-filtered photosites to be interpolated into a single full-color image. Lower-end, single-sensor consumer video cameras had worked like that for decades, but higher-end results would demand a high-resolution sensor in order to create adequately-sharp results.
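The interpolation idea can be illustrated with the crudest possible reconstruction: collapsing each 2x2 cell of an RGGB mosaic into one RGB output pixel. This is a simplified sketch, not any manufacturer's demosaicking algorithm, and it makes the resolution cost obvious - the output has only a quarter as many pixels as the sensor has photosites, which is why sharp results demand high-resolution sensors:

```python
import numpy as np

# Minimal "demosaic" for an RGGB Bayer mosaic: each 2x2 cell
#   R G
#   G B
# becomes one RGB pixel, with the two green photosites averaged.
def demosaic_2x2(mosaic: np.ndarray) -> np.ndarray:
    r = mosaic[0::2, 0::2]                               # red photosites
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average both greens
    b = mosaic[1::2, 1::2]                               # blue photosites
    return np.stack([r, g, b], axis=-1)

# A uniform mid-grey scene: every photosite reads 0.5 regardless of filter.
mosaic = np.full((8, 8), 0.5)
rgb = demosaic_2x2(mosaic)
print(rgb.shape)  # (4, 4, 3): quarter the pixel count of the 8x8 mosaic
```

Practical demosaicking interpolates missing color samples at the full photosite grid rather than binning, but the trade is the same: single-sensor color reconstruction spends sensor resolution to buy color.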

Necessity being the mother of invention, large, high-resolution single sensors were duly developed. Early digital cinema cameras, such as the Sony F35 and related Panavision Genesis, used sensors with repeating columns of red, green and blue-filtered photosites. Soon, the Bayer pattern used by digital stills cameras was adopted, with variations emerging to this day in cameras such as Blackmagic’s Ursa 12K.

Some manufacturers have designed large sensors as a custom part intended solely for broadcast cameras, for reasons we will encounter later. Mostly, though, large sensor technology for broadcast cameras owes its existence to work done for the cinema and stills worlds.

Image Processing

Besides the use of single sensors, and the sheer size of those sensors, the manner in which cameras process their raw sensor data can have a profound impact on the resulting image. Some of the earliest digital cinema cameras were actually derived from broadcast cameras of the early 2000s. Thomson’s Viper FilmStream camera of 2002 was an interesting example, used on feature films including Collateral, The Curious Case of Benjamin Button and Zodiac, all well-regarded for their photography at a time when 35mm film production was still mainstream.

Viper was essentially a ⅔”, three-sensor studio camera head, modified to remove some of the more restrictive image processing and color handling choices that were and are necessary for broadcast production. Around the same time, the Andromeda aftermarket modification was released for Panasonic’s humble, prosumer, standard-definition AG-DVX100 camcorder. It was built for the entry-level short filmmaker, but stands as a useful demonstration that even a basic camera could produce much more cinematically persuasive images when the data from its tiny ⅓” sensors was treated with cinema finishing in mind. Recording the uncompressed output from these cameras was hard work for the technology of the time, but the results were hard to fault.

At least some aspects of cinematic images, it seems, do not necessarily demand a single large sensor. Some benefits of improved color processing have become part of broadcast workflows anyway, with the advent of HDR, and certain manufacturers have applied their cinema color encoding standards to broadcast cameras. Making the most of that wide-ranging picture information then becomes a matter of appropriate vision engineering.

Going Large

Despite all this, large-sensor benefits such as sensitivity, noise and dynamic range are difficult to get any other way. Even if a production is willing to sacrifice the convenience and familiarity of high-ratio zoom lenses, compatibility with historic cinema lenses is next to impossible on cameras with three small sensors. And the headline capability of shallow depth of field is subject to realities of physics and geometry that are effectively impossible to simulate accurately.
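That geometric reality can be demonstrated with a thin-lens depth-of-field estimate. This is a rough sketch, not a lens design tool: it assumes matched field of view, f-number and subject distance, takes the circle of confusion as sensor width divided by 1500 (a common rule of thumb), and uses approximate sensor widths for a ⅔” chip and Super 35:

```python
import math

# Approximate total depth of field in meters, thin-lens model.
def depth_of_field_m(sensor_width_mm, horizontal_fov_deg, f_number, subject_m):
    # Focal length needed for this field of view on this sensor width.
    focal_mm = sensor_width_mm / (2 * math.tan(math.radians(horizontal_fov_deg) / 2))
    coc_mm = sensor_width_mm / 1500          # circle of confusion, rule of thumb
    hyperfocal_mm = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    s = subject_m * 1000
    near = hyperfocal_mm * s / (hyperfocal_mm + (s - focal_mm))
    far = hyperfocal_mm * s / (hyperfocal_mm - (s - focal_mm))
    return (far - near) / 1000 if far > 0 else float("inf")

# Same shot - roughly 40 degrees horizontal, f/2.8, subject at 3 m - framed on
# a ~9.6 mm wide 2/3" broadcast sensor versus ~24.9 mm wide Super 35.
print(round(depth_of_field_m(9.6, 40, 2.8, 3), 2))   # deeper focus
print(round(depth_of_field_m(24.9, 40, 2.8, 3), 2))  # noticeably shallower
```

With all else matched, total depth of field scales roughly inversely with sensor width, which is why the look of a large sensor cannot be reproduced optically on a small one at the same f-number.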

As cinema cameras have increasingly chased the dimensions of ever more upscale film formats, the word “large” has come to describe some very large sensors indeed. Broadcast equipment manufacturers, meanwhile, have also released large-sensor cameras and seem likely to release more, though the market is far from mature. Making large-sensor cameras part of a broadcast workflow, then, involves a series of carefully-weighed choices as technology advances and the capabilities of large sensors - and small ones - continue to change and evolve.
