Tips for Selecting a Video Camera, Part 1

Sometimes we can make an informed decision about which camera acquisition format to use; other times we take what we get and fix it in post. Both cases require an understanding of the tradeoffs manufacturers have made in producing a camera designed to sell at a particular price point. Without getting into why manufacturers make multiple models, or why a particular technology gets turned into a product at some point in its life cycle, let’s try to be agnostic.

All image-acquiring technologies work within particular parameters. Some of these parameters can be improved through component selection, but those decisions may also increase cost. Other manufacturing options are simply limited by the current state of the art.

If I am renting equipment for a particular project, my criteria will be different than if the cameras I purchase have to serve multiple purposes. In either case, create a checklist to make sure that nothing is forgotten in the euphoria over obtaining a manufacturer’s newest model camera. I like to start with the price (my budget) and then list the “need to haves” and the “nice to haves”.

Add to these lists: ease of use, lens and audio options, codecs and frame rates, resolution and color depth, and the number of f-stops and contrast range. Then consider size and weight.

Video Fidelity

Video fidelity is a term we understand, but it may be hard to define. How do we measure it? Let’s use this as a working definition: given the best reproduction system possible, how closely does the image compare to the original scene? This is where increasing how much, and what kind of, technology (i.e. cost) goes into the manufacturing process can make a difference.
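There is no single agreed-upon metric, but as a hint of how fidelity can be quantified, here is a minimal sketch (assuming NumPy and two same-sized 8-bit frames) that computes PSNR, a common if imperfect proxy for how closely a reproduced image matches a reference:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two 8-bit images.

    Higher means closer to the reference; identical images give infinity.
    """
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    peak = 255.0  # maximum value of an 8-bit sample
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: compare a reference frame to a noisy copy of itself.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
noisy = np.clip(reference + rng.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(reference, noisy):.1f} dB")
```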

The fidelity of the best DSLR will never equal that of the best digital cinema camera “across the board”. However, there may be some instances where the images are indistinguishable from each other. If your project is destined only for YouTube or another bandwidth-limited delivery channel, then rent the less expensive DSLR instead of a super high-end camera with lots of features you’ll never use.

After considering the glass and the electronics in the camera, consider the fidelity of the recorded image. Starting at the sensor chip: size does make a difference, but it’s not the whole story. The transducers on the chip change photons into electrons. The larger each pixel, the more electrons for the same ambient light, but putting a microlens in front of each pixel will do the same, as will increasing the inherent sensitivity of the transducer. This is one argument for 35mm sensors; the other consideration is depth of field.
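To make the pixel-size argument concrete, here is a rough sketch of the photon-to-electron arithmetic. Every number in it (photon flux, quantum efficiency, exposure) is an illustrative assumption, not a real sensor spec:

```python
# Why pixel size matters: for the same scene illumination, the electrons
# collected scale with the light-gathering area of each pixel.
PHOTON_FLUX = 1000.0      # photons per um^2 per second at the sensor (assumed)
QUANTUM_EFFICIENCY = 0.5  # fraction of photons converted to electrons (assumed)
EXPOSURE = 1.0 / 50.0     # seconds (a 1/50 s exposure)

def electrons_collected(pixel_pitch_um: float) -> float:
    """Electrons generated by one square pixel of the given pitch."""
    area = pixel_pitch_um ** 2
    return PHOTON_FLUX * area * QUANTUM_EFFICIENCY * EXPOSURE

for pitch in (2.0, 4.0, 8.0):  # small phone-style pixel vs. large cinema pixel
    print(f"{pitch:4.1f} um pitch -> {electrons_collected(pitch):6.0f} e-")
```

Doubling the pixel pitch quadruples the light-gathering area, which is why large-pixel sensors pull ahead in low light.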

If you really need a shallow depth of field, get a big sensor. Otherwise, look carefully at the sensitivity specification. Remember, for a given sensor size, sensitivity goes down as resolution goes up, because each pixel gets smaller.

Spreading the same amount of light across a 4K sensor and then taking an SD cutout is not the same as shooting SD with the same sensor by combining the output of approximately 16 pixels into each SD pixel.
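A quick simulation makes the difference concrete: combining N noisy pixels into one boosts the signal N-fold but the uncorrelated noise only by the square root of N, so binning 16 pixels buys roughly a 4x SNR advantage over simply cropping. The signal and noise values below are illustrative assumptions:

```python
import numpy as np

# Downscaling (binning) pixels improves signal-to-noise ratio; cropping does not.
rng = np.random.default_rng(1)
N = 16            # pixels combined per output pixel (approx., per the text)
signal = 100.0    # mean electrons per small pixel (assumed)
read_noise = 20.0 # RMS noise electrons per pixel (assumed)

samples = signal + rng.normal(0.0, read_noise, size=(100_000, N))

crop = samples[:, 0]          # keep one pixel, discard the rest
binned = samples.sum(axis=1)  # combine all 16 pixels into one

print(f"crop   SNR: {crop.mean() / crop.std():.1f}")     # ~ 100/20 = 5
print(f"binned SNR: {binned.mean() / binned.std():.1f}") # ~ 4x better (sqrt(16))
```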

While electronic imagers no longer need a mechanical shutter, we still use the term, as in rolling or global shutter. A rolling shutter exposes and reads out the pixel rows sequentially; a global shutter captures all pixels at once.
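A minimal sketch of why this matters: with a rolling shutter each row is sampled at a slightly different time, so a fast-rotating object such as a propeller blade is in a different position for every row of the frame. The readout time and propeller speed below are assumed, illustrative values:

```python
import numpy as np

# A rolling shutter samples each row at a slightly different time, smearing
# a fast-moving object (e.g. a propeller blade) into a curve.
ROWS = 1080
ROW_READOUT = 10e-6  # seconds between successive rows (assumed, ~11 ms per frame)
PROP_RPM = 2400.0    # propeller speed (assumed)
omega = PROP_RPM / 60.0 * 2 * np.pi  # angular velocity in rad/s

for row in (0, 270, 540, 810, 1079):
    t = row * ROW_READOUT                  # time at which this row is sampled
    angle = np.degrees(omega * t) % 360.0  # blade angle when the row was read
    print(f"row {row:4d} read at t={t * 1e3:5.2f} ms, blade at {angle:6.1f} deg")
```

The blade rotates well over 100 degrees between the top and bottom rows of the frame, which is why it appears bent in the capture below.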

The image below shows an extreme example of the kind of distortion a rolling shutter can cause: the plane’s propeller appears to be curved. The screen capture comes from an excellent tutorial by James Hutton that illustrates this and other types of image distortion. I recommend you check it out.

A camera's rolling shutter will distort rapidly moving images. Note what appears to be a curved airplane propeller in the video clip.

Previously, the extra circuitry required to implement a global shutter not only increased cost but also reduced sensitivity, because the shutter circuitry occupied the same plane as the sensing elements, reducing the area available for light capture. Fortunately this is changing, and a global shutter is definitely the better solution. In either case, the speed with which the charges can be transferred to memory is going to limit exposure times and frame rates. The roughly 6 gigapixels/second throughput required for 2000+ fps recording requires special sensors.
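The arithmetic is straightforward. Assuming, for illustration, a 3-megapixel frame (consistent with the throughput figure quoted above) and a 12-bit readout:

```python
# Sanity check on sensor readout throughput. Frame size and bit depth
# are illustrative assumptions, not a specific camera's specs.
pixels_per_frame = 3_000_000
fps = 2000
bits_per_pixel = 12

pixel_rate = pixels_per_frame * fps                # pixels per second
data_rate = pixel_rate * bits_per_pixel / 8 / 1e9  # gigabytes per second

print(f"pixel rate: {pixel_rate / 1e9:.1f} Gpx/s")  # 6.0 Gpx/s
print(f"data rate:  {data_rate:.1f} GB/s")          # 9.0 GB/s of raw data
```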

Here is a video that demonstrates very high-quality slow-motion (high frame-rate) imagery. Cinematographer Greg Wilson and director Brendan Bellomo were asked by Vision Research and Abel CineTech to shoot the first test footage with the new Vision Research Phantom Flex4K digital cinema camera (March 2013). The entire clip can be seen here.

A clip from test footage of the Vision Research Phantom Flex4K, shot by Greg Wilson and Brendan Bellomo. See the video at the clip URL listed above.

Another limitation is the number of photons required to get above the inherent noise level of the electronics. The voltage coming off the sensor is an analog signal; we need to convert it to digital.
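Here is a minimal sketch of that conversion, with assumed, illustrative values for full-scale voltage, bit depth, and noise. A signal below the noise floor produces ADC codes that wander unpredictably, which is exactly the limitation described above:

```python
import numpy as np

# The analog-to-digital step: the sensor voltage is quantized to a fixed
# number of levels, and any signal below the electronic noise floor is
# indistinguishable from noise.
rng = np.random.default_rng(2)
FULL_SCALE = 1.0   # volts at sensor saturation (assumed)
BITS = 12          # ADC resolution (assumed)
NOISE_RMS = 0.002  # RMS electronic noise in volts (assumed)
LEVELS = 2 ** BITS

def digitize(voltage: float) -> int:
    """Add read noise, then quantize to a 12-bit code."""
    noisy = voltage + rng.normal(0.0, NOISE_RMS)
    code = round(noisy / FULL_SCALE * (LEVELS - 1))
    return min(max(code, 0), LEVELS - 1)  # clip to the ADC's range

for v in (0.001, 0.01, 0.5):  # below, near, and well above the noise floor
    codes = [digitize(v) for _ in range(5)]
    print(f"{v:5.3f} V -> codes {codes}")
```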

Whoops, we almost forgot color! Is each pixel actually three sensor elements, or four, or only one? Do we have a single chip or three chips? Again, what are the tradeoffs, aside from cost? You can make a smaller camera with a single chip. At one point the solution was a mechanical RGB color wheel. Today, if you have a single chip there is a color filter array in front of it, which is going to absorb some photons, with all the consequences.

The layout of the filter is a consideration; one possibility is the Bayer pattern. The layout affects the amount of information available in each color as well as how accurately geometric shapes are rendered. I call this “snake oil”: how many sensing elements do I need to define a point in a raster?

Many manufacturers will say “one” and define their color resolution on this basis, but as there are no full-spectrum sensing elements, each element senses only one color. If I have a 1920 × 1080 filter array of equal RGB elements, is this 691,200 RGB pixels, or 2 megapixels as many manufacturers contend? To get to 2 MP, two-thirds of the color information at each pixel must come from adjacent locations and may not be accurate.
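The arithmetic behind that claim, in a few lines (the equal R/G/B split follows the example above, not an actual Bayer layout, which uses two greens per quad):

```python
# A 1920 x 1080 array of single-color photosites, split equally among
# R, G and B, yields far fewer "true" full-color pixels than marketed.
width, height = 1920, 1080
photosites = width * height        # 2,073,600 single-color elements

true_rgb_pixels = photosites // 3  # 691,200 full RGB triads
marketed_megapixels = photosites / 1e6

print(f"photosites:      {photosites:,}")
print(f"true RGB pixels: {true_rgb_pixels:,}")
print(f"marketed as:     {marketed_megapixels:.1f} MP "
      f"(2/3 of each pixel's color interpolated from neighbors)")
```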

Does this mean we can use a 4K sensor to get accurate color information in an HD picture? Theoretically, yes; it’s only software after all. Which matters more, high resolution or accurate color rendition? At screen sizes up to 50 inches, HD with accurate colors is going to look better than 4K with moiré and false-color artifacts. This brings us to the “secret sauce”, but before that, let’s get digital. We will begin that discussion next month.
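One simple way to realize that idea in software is to collapse each 2x2 RGGB quad of a 4K Bayer mosaic into a single full-color HD pixel, so every output pixel is built only from measured values. This is a sketch of the principle, not how any particular camera or debayering tool actually scales its images:

```python
import numpy as np

# Collapse each 2x2 RGGB quad of a 4K Bayer mosaic into one RGB pixel,
# halving resolution to HD with measured (not interpolated) color.
rng = np.random.default_rng(3)
H, W = 2160, 3840
mosaic = rng.random((H, W)).astype(np.float32)  # stand-in for raw Bayer data

r  = mosaic[0::2, 0::2]  # red photosites
g1 = mosaic[0::2, 1::2]  # first green of each quad
g2 = mosaic[1::2, 0::2]  # second green of each quad
b  = mosaic[1::2, 1::2]  # blue photosites

hd = np.stack([r, (g1 + g2) / 2.0, b], axis=-1)  # true-RGB 1920x1080 frame
print(hd.shape)  # (1080, 1920, 3)
```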
