Sensor Developments, a Look Forward

The core of any camera is the sensor; along with the lens, it defines and constrains the performance of the camera more than any downstream processing. Sensors have seen many advances, with the move from vacuum tube devices to semiconductors being one of the great leaps. Although early solid-state cameras used charge-coupled devices (CCD), the favored technology today is the complementary metal oxide semiconductor (CMOS). New work from Panasonic and Sony shows that development continues apace as we head for 8K, HDR and 120P. Panasonic has further developed its organic photoconductive film, which separates photon capture from exposure charge accumulation and adds an in-sensor ND filter and a global shutter. Sony has developed a sensor with a new global shutter design.

The principle of the CMOS sensor is an array of photo-sites comprising light-sensitive photo-diodes. During the exposure, incident photons are converted to electrons, which manifest as electrical charge. At the end of the exposure period the charge is read out to an analog-to-digital (A/D) converter, and the photo-diode is reset, ready for the next exposure.

The photo-sites can be read row-by-row in a scanning motion, the rolling shutter. If the scan occupies a considerable fraction of the exposure time, subject motion will give rise to the ‘jello’ effect, where moving objects are skewed.
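
To see why a slow row-by-row readout skews motion, consider a minimal sketch in Python. The tiny 8×24 'sensor', the bar speed and the start column are illustrative values only, not taken from any real device; each row samples the scene a moment later than the one above, so a vertical bar moving sideways lands in a different column on every row and comes out as a diagonal streak.

```python
ROWS, COLS = 8, 24          # a tiny, purely illustrative 'sensor'
BAR_SPEED = 1.0             # columns the bar moves per row-readout interval
BAR_START = 4               # column the bar occupies when row 0 is read

# Each row is read out a little later than the one above, so the moving
# bar has shifted by the time the lower rows are sampled.
for row in range(ROWS):
    bar_col = int(BAR_START + BAR_SPEED * row) % COLS
    print("".join("#" if col == bar_col else "." for col in range(COLS)))
```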

Global Shutter

The alternative is to capture all the photo-sites at the same time by transferring the charge to storage local to each photo-site, then reading it out as before. This method is referred to as the global shutter.

So, why aren’t all sensors using global shutter technology? Well, the additional electronic components at each photo-site take up potential light-gathering area, so sensitivity is decreased. Most camera manufacturers have focused on increasing the readout speed of rolling shutter sensors, rather than sacrificing sensitivity and dynamic range to a global shutter design.

What is BSI?

Early sensors, like the retina, have the ‘wiring’ running on the side of the sensor where the image is formed. This impedes the photon gathering carried out by the on-chip lenses and has been an important obstacle to the miniaturization of photo-sites.

A back-illuminated structure moves the wiring and transistors to the reverse of the silicon substrate.

BSI moves the wiring to the back of the sensor.

Current Sensor Performance

Today’s cameras have low noise and high sensitivity. They can capture excellent images in near darkness, something just not possible a decade ago. If you can see the subject, you can capture it. So, have we reached the limits of what we need in a television camera?

There are issues, highlighted by UHD productions, especially the super slow-motion cameras used in sports. One issue is noise, another is dynamic range.

The dynamic range of a sensor is determined by the amount of light it can capture. For each photo-site, the primary factors are the area of the site, and the exposure time. The larger the area, and the longer the exposure, the more photons can be collected.

Photo-site area

The area of each site is a function of the size of the sensor—2/3 inch, Super 35 and others—and the resolution. An HD 2/3-inch sensor has 5µm photo-sites; a UHD sensor of the same size has 2.5µm photo-sites, half the width and one-quarter the light-gathering area. UHD also supports higher frame rates, 50 versus 25 fps or 60 versus 30 fps, so half the exposure time. That all adds up to a UHD photo-site receiving one-eighth of the light of its counterpart in an HD sensor.
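
For those who like to see the arithmetic, a quick back-of-envelope sketch in Python, using only the figures quoted above, confirms the one-eighth figure.

```python
# Back-of-envelope: relative light per photo-site, HD 2/3-inch vs UHD 2/3-inch.
hd_pitch_um, uhd_pitch_um = 5.0, 2.5          # photo-site widths from the text
hd_exposure_ms = 1000 / 25                    # 25 fps
uhd_exposure_ms = 1000 / 50                   # 50 fps, so half the exposure

area_ratio = (uhd_pitch_um / hd_pitch_um) ** 2        # 0.25
exposure_ratio = uhd_exposure_ms / hd_exposure_ms     # 0.5
light_ratio = area_ratio * exposure_ratio             # 0.125

print(f"A UHD photo-site gathers {light_ratio:.3f}x the light of an HD one")
```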

The casual observer may suggest using larger sensors, Super 35 or full-frame. A full-frame sensor at UHD resolution has a pixel pitch of around 10µm, sixteen times the area of the photo-sites in a 2/3-inch UHD sensor. That would fix the problem. Well, yes, but for sports production the expectation is that the lens should have the zoom range of a 2/3-inch camera, 90:1 or so. Zooms for full-frame cameras are rarely more than 3:1.

Noise — the Backstop

How about increasing the gain? That works, but the dynamic range drops and the noise increases. Even if the electronics contributed no noise, there is still a lower limit set by physics. The photons falling on the sensor arrive at random intervals, a consequence of quantum statistics. The shutter samples at regular intervals, so from one frame to the next there is a small but measurable variation in the number of photons captured. This photon or shot noise is most noticeable when the number of photons is small, in the dark areas of an image. In the highlights, the sheer number of photons arriving drowns out the small variation.

Photon noise, a quantum effect, ultimately limits low-light performance.
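
A short simulation makes the point. Shot noise follows Poisson statistics, so for a mean of N photons per exposure the frame-to-frame variation is about √N and the signal-to-noise ratio only grows as √N; the photon counts below are illustrative, not tied to any particular sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# For a mean of N photons per exposure, Poisson statistics give a
# frame-to-frame standard deviation of ~sqrt(N), so SNR ~ sqrt(N).
for mean_photons in (10, 100, 10_000):        # shadow, mid-tone, highlight
    frames = rng.poisson(mean_photons, size=100_000)
    snr = frames.mean() / frames.std()
    print(f"mean {mean_photons:>6} photons -> SNR ~ {snr:6.1f} "
          f"(sqrt(N) = {mean_photons ** 0.5:6.1f})")
```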

With high dynamic range (HDR) productions on the increase, giving up the camera's potential dynamic range is not ideal. The upshot is that UHD HDR productions still challenge the sensor designers.

It has been demonstrated that good, quiet images can be captured at UHD resolution, 60P and HDR with current sensor technology at existing stadium lighting levels. Looking forward, it does become more difficult: 8K 120P exacerbates the problems, with even shorter exposure times and more pixels.

Lurking in the background is the global shutter issue. The higher the frame rate, the more difficult it is to read out the rows of photo-sites fast enough to avoid the jello effect.

Super Hi-Vision and the Push Forwards to 8K

The NHK Super Hi-Vision project has thrown down the challenge to camera and lens developers, much in the way that Hi-Vision did with the push from SD up to HD. But broadcast television is not the only use for sensors. Astronomers look for sensitivity. Machine vision needs all-round performance, with global shutter important where geometric distortion of the scene would not be acceptable. And there is the smartphone camera. The many uses for sensors mean that development in one area can benefit others.

Panasonic OPF CMOS Sensor

Panasonic has already announced a roadmap to 8K cameras, and a recent announcement details some of the fruits of its research. The announcement concerns an 8K, 60P sensor with several new technologies:

  1. Organic Photoconductive Film (OPF)
  2. In-pixel capacitive coupled noise cancellation
  3. In-pixel gain switching
  4. Voltage controlled sensitivity modulation

In a conventional sensor, a photo-diode captures photons, converting them to electrons, which are held as charge until it is read out to the analog-to-digital converter. The diode both performs the photo-electric conversion and accumulates the light value during the exposure by storing charge.

The OPF layer provides shutter and ND filter functions.

Organic Photoconductive Film, OPF

The OPF separates these functions: the photoconductive film performs the photo-electric conversion, and the charge is stored in a separate device.

Photons only penetrate silicon by a few µm, so the capacity of a photo-diode is largely dependent on the area of the photo-site. For 5µm photo-sites the capacity, called the full well, may be around 50,000 electrons. The OPF design has a higher capacity and makes more efficient use of the photo-site area. The 8K sensor Panasonic has developed has a saturation value of 250,000 electrons, which in terms of light is more than two stops over a typical sensor. This increased saturation point leads to higher dynamic range.
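
Expressed in stops, the headroom is easy to check from the two figures above; the arithmetic below is a back-of-envelope sketch using the article's numbers.

```python
from math import log2

# Stops are doublings of light, so the extra highlight headroom is the
# base-2 log of the ratio of the two capacities quoted above.
typical_full_well = 50_000     # electrons, ~5 um photo-site (article figure)
opf_saturation = 250_000       # electrons, Panasonic 8K OPF sensor (article figure)

extra_stops = log2(opf_saturation / typical_full_well)
print(f"Extra highlight headroom: about {extra_stops:.1f} stops")   # ~2.3
```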

In-pixel Capacitive Coupled Noise Cancellation

One of the obstacles for any sensor designer is how to read out and reset the photo-sites without creating noise that can couple into adjacent components. Panasonic has developed a new structure that can cancel photo-site reset noise.

In-pixel Gain Switching

With shades of the Varicam, the 8K OPF CMOS sensor can switch between two sensitivity modes, with a full capacity of 4,500 electrons in the high sensitivity mode and 450,000 in the low sensitivity mode.
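
How the two modes might be combined is sketched below. This is a hypothetical illustration of merging a dual-gain readout into a single linear value, assuming a 14-bit converter and a simple 'switch when the high-sensitivity sample nears clipping' rule; it is not a description of Panasonic's actual processing.

```python
# Hypothetical sketch (not Panasonic's actual processing) of merging the
# two sensitivity modes of one pixel into a single linear light value.
# The high sensitivity mode digitises the shadows finely but clips at
# 4,500 electrons; the low sensitivity mode holds up to 450,000 electrons.
HIGH_SENS_FULL_WELL = 4_500      # electrons (article figure)
LOW_SENS_FULL_WELL = 450_000     # electrons (article figure)
ADC_MAX = 2**14 - 1              # a 14-bit converter, assumed for this sketch

def to_electrons(code: int, full_well: int) -> float:
    """Convert a raw ADC code back to an electron count for its mode."""
    return code / ADC_MAX * full_well

def combine(high_sens_code: int, low_sens_code: int) -> float:
    """Prefer the finely quantised high sensitivity sample unless it clips."""
    if high_sens_code < ADC_MAX * 0.9:                       # not near clipping
        return to_electrons(high_sens_code, HIGH_SENS_FULL_WELL)
    return to_electrons(low_sens_code, LOW_SENS_FULL_WELL)   # clipped highlight

print(combine(high_sens_code=2_000, low_sens_code=20))       # shadow pixel
print(combine(high_sens_code=16_383, low_sens_code=12_000))  # clipped highlight
```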

Panasonic uses the OPF layer as a variable shutter/filter.

Voltage Controlled Sensitivity Modulation – Global Shutter and ND Filter

The sensitivity of the OPF can be controlled simply by varying the voltage applied to it. This can be used to implement a global shutter and a variable neutral density (ND) filter.

Sony and the Global Shutter

When the Venice camera was announced, Sony's spokesperson explained that, in the quest for sensitivity, the company had not adopted a global shutter, in order to maximize light-gathering capability.

At the recent International Solid-State Circuits Conference, the company described a design for a BSI sensor with global shutter.

Although only a research paper, it indicates a direction the company could take for real products. It should be borne in mind that the need for a global shutter is more critical in machine vision applications, so this development is not necessarily aimed at the broadcast sector.

Sony researchers have bonded an array of A/D converters to the rear of a CMOS imager to realize a global shutter.

Conventional CMOS image sensors read out the signals from pixels row by row to the A/D converters, which results in image distortion (jello effect) caused by the time shift during the row-by-row readout.

The new Sony sensor comes with newly developed low-current, compact A/D converters positioned beneath each pixel. These A/D converters instantly convert the analog signal from all the simultaneously exposed pixels in parallel to a digital signal, which is temporarily stored in digital memory. This architecture eliminates distortion due to readout time shift, making it possible to provide a global shutter function. It is an industry first: a high-sensitivity back-illuminated CMOS sensor with pixel-parallel A/D converters and more than one megapixel.

The inclusion of nearly 1,000 times as many A/D converters compared to the traditional column A/D conversion method means an increased power demand, addressed by developing a compact 14-bit A/D converter with low-current operation.

Both the A/D converter and digital memory spaces are secured in a stacked configuration, with these elements integrated into the bottom chip. In addition, a newly developed data transfer mechanism is implemented in the sensor to enable the high-speed, massively parallel data readout required for the A/D conversion process. At only 1.33M effective pixels it’s a way off 4K applications, but an interesting development nonetheless.

… And Finally

New designs and new fabrication methods are pushing the envelope for camera sensors. With the immediate goal of commercially available 8K 120fps cameras in sight, the benefits will trickle down to digital cinema and UHDTV. We have come a very long way from the iconoscope and the curiously-named image dissector, both vacuum tube devices from 1930s cameras.
