Knowing which camera to choose for an assignment matters when the decision is yours to make. Knowing how to get the best out of the resulting footage matters when it isn't. Understanding how the technology works helps in both cases.
In part one we talked about the physical construction of a camera’s sensor and tradeoffs manufacturers have to make to reach their own goals. This concluding portion will investigate the electronics and software.
The conversion of incoming photons to electrons at a photosite on a CCD or CMOS chip produces a charge proportional to the light input. The quantum efficiency of the sensor, typically under 50%, determines how much of that light is converted and thus affects the signal-to-noise (S/N) ratio. The charge accumulates in what is called the sensor's well, which is then read out at some interval in time.
This charge is analog and varies from zero to the maximum the well can hold; that span defines the absolute dynamic range of the system. The read-out amplifier then matches the output range of the sensor to the input range of the A/D converter.
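As a back-of-the-envelope sketch of that dynamic range: it is often expressed in stops, the base-2 logarithm of the full-well capacity over the noise floor. The figures below are hypothetical, picked only to illustrate the arithmetic.

```python
import math

def dynamic_range_stops(full_well_e: float, read_noise_e: float) -> float:
    """Dynamic range in stops: log2 of full-well capacity over the noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor: 30,000 electrons full well, 5 electrons read noise
print(round(dynamic_range_stops(30_000, 5), 1))  # -> 12.6 stops
```

A deeper well or a quieter readout each buy more stops, which is why both figure in the manufacturer tradeoffs discussed in part one.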
These parameters, along with the required frame rate, tell us what kind of A/D converter is required. All of these processes add noise to the signal. Remember that digital camera sensors have a fixed sensitivity: increasing the ISO actually amplifies the signal, along with any noise we may have accumulated. The bit depth of the A/D converter determines the finest tonal differentiation possible, but the S/N ratio of the entire system is the real limit.
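A quick numerical sketch makes the ISO point concrete: applying gain to a signal that already carries noise scales both alike, so the S/N ratio does not improve. The signal and noise levels below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0                              # mean photoelectron count
noise = rng.normal(0.0, 10.0, 10_000)       # noise accumulated before the amplifier

raw = signal + noise
gain = 4.0                                  # e.g. two stops of extra ISO

amplified = raw * gain
snr_raw = signal / raw.std()
snr_amplified = (signal * gain) / amplified.std()

# The gain multiplies signal and noise equally, so the two ratios match.
print(round(snr_raw, 2), round(snr_amplified, 2))
```

Gain applied after readout can make an image brighter, but the tonal information was fixed the moment the well was read.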
A camera may advertise a 14- or 16-bit A/D converter, but if the rest of the system is not equally good, you are generating data with no additional information. Current sensors are essentially linear devices, while the human visual system is non-linear: we see better in the dark. The raw information coming off the chip therefore has to be matched to our perception, in camera or in post processing. This brings us to the manufacturer's "secret sauce", but before that let's talk a little about image stabilization and autofocus.
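That linear-to-perceptual matching is commonly done with a power-law (gamma) encode. A minimal sketch, using a simple 1/2.2 power curve rather than any particular camera's actual transfer function:

```python
def linear_to_gamma(v: float, gamma: float = 1 / 2.2) -> float:
    """Power-law encode: redistributes linear sensor values toward the
    shadows, roughly matching human brightness perception."""
    return v ** gamma

# Mid-grey in linear light (~18% reflectance) lands near mid-scale after encoding
print(round(linear_to_gamma(0.18), 2))  # -> 0.46
```

Real cameras use more elaborate curves (with a linear toe, or log encodes for raw workflows), but the principle is the same: spend the available code values where the eye can see the differences.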
Stabilization and Autofocus
Lens-based image stabilization moves the image around to keep it on the sensor, thus maintaining full resolution. The downside is added complexity, lens noise and perhaps additional lens elements, with a resulting reduction in sensitivity; at higher frame rates the relatively slow reaction time of the mechanical system also becomes visible.
Sensor stabilization moves the sensor around to follow the image, which allows any lens to be used. A third option is electronic stabilization, which shifts the image digitally so that the object being stabilized stays in the same position in the image plane. If the actual image size is sufficiently larger than the required output size, there is no loss in resolution; otherwise a blow-up is required to avoid clipping the edges.
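The margin available to electronic stabilization is simple to work out: it is the spare pixels between the sensor readout and the output frame. A small sketch with a hypothetical UHD readout delivering an HD frame:

```python
def max_shift_px(sensor_w: int, sensor_h: int, out_w: int, out_h: int):
    """Largest horizontal and vertical correction (in pixels) available
    before the output window is pushed past the sensor edge."""
    return (sensor_w - out_w) // 2, (sensor_h - out_h) // 2

# Hypothetical: 3840x2160 readout cropped to a 1920x1080 output frame
print(max_shift_px(3840, 2160, 1920, 1080))  # -> (960, 540)
```

Once the required correction exceeds that margin, the system must either scale the image up or let the frame edge clip.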
Consider the situation where you have an HD-sensor camera but only need an SD picture. The processing power required to make the conversion in camera is going to add heat, especially if the picture must be scaled as well, and the result may be less accurate than what your post-production tools can deliver. Here is a video, produced by Florian Korn, that compares lens stabilization and camera-body stabilization.
Autofocus (AF) is a great aid, when it works. However, the shallower depth of field of larger (35mm) sensors makes autofocus systems slower to react, because the zone that is actually in focus is smaller. Overshoot and pumping occur when the scene changes faster than the AF can react or when there is not enough information; this is visible in low light if there is no additional infrared emitter on the camera. For professional shooters, the most important thing about AF is the off button.
Up until now we have talked about matching the image recording requirements of your project(s) with the available budget. However the creative process does not stop at the camera. Integrating the data generated by the camera into the post production workflow can be seamless or seem senseless.
For instance, the data coming off the newest digital cinema cameras runs around a terabyte per hour. Has lossless data reduction been applied before recording? Are the systems in place to decode the image (debayering, interframe reconstruction, decoding, etc.) without delaying the workflow? Do you have an on-line working copy, an off-line backup and an on-shelf safety?
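That terabyte-per-hour figure is easy to sanity-check from the frame parameters. The resolution, bit depth and frame rate below are one plausible raw configuration, not the spec of any particular camera:

```python
def uncompressed_rate_tb_per_hour(width: int, height: int,
                                  bits_per_photosite: int, fps: float) -> float:
    """Raw data rate before any reduction, in terabytes per hour."""
    bits_per_second = width * height * bits_per_photosite * fps
    return bits_per_second * 3600 / 8 / 1e12

# Hypothetical 4K raw: 4096x2160 Bayer, 16 bits per photosite, 24 fps
print(round(uncompressed_rate_tb_per_hour(4096, 2160, 16, 24), 2))  # -> 1.53
```

Roughly a terabyte and a half per hour before any compression, which is why the questions about data reduction and backup copies above are not academic.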
For many applications a typical signal flow will look something like this:
Camera manufacturers all have their own secret sauce. What it does can affect what’s required in the post-production process.
In Figure 1 you will see a little box marked "reconstruct image". This is the camera's secret sauce. Part of what it does determines how fast, and how well, we can get the image quality our production needs from the data generated. The reason cameras record intraframe or uncompressed, at significantly higher data rates than interframe even when only redundant data is removed, is that the secret sauce may not be good enough! When purchasing a camera, keep your options open and make sure it can at least record raw as well as inter- and intraframe. Even better is when the camera can stream (not necessarily over IP) all three. If renting, match the camera's capabilities to the rest of the production chain; don't pay for functionality you won't use on the shoot.
In Part 1 I promised you a 4K mobile phone. It's early days right now, and from production experience there's been little news. Even so, never forget the photographer's adage: "the best camera is the one you have with you". Google is already streaming footage in 4K from all three available phones.
A good example of mobile phone 4K imagery is shown here in this clip produced by Majid Syed:
To get an idea of how the resulting video might be used, I spoke to people using sportscams that generate 4K video with many of the same limitations. Both David Sperl, who flies drones, and Christoph Hoerner, an extreme skier, lamented the frame-rate limitation at 4K.
This footage of clouds, mountains and winter skiing was shot with a GoPro camera by Christoph Hoerner. It is time-lapsed due to the low frame rate, but was scanned and panned to fit the HD frame, adding variety to the shots.
The 4K functionality is a plus for mobile phones, and some models work underwater without an extra housing.
Today's mobile phones make an excellent backup camera, albeit without the image adjustments most shooters expect; still, locked on a tripod they would make a great B-cam. Their throwaway nature should also yield some interesting footage, because they are always in your pocket!