The recently-released AJA CION camera
Sometimes we can make an informed decision about which camera acquisition format to use; other times we take what we get and fix it in post. Both cases require an understanding of the tradeoffs manufacturers have made in producing a camera designed to sell at a particular price point. Without getting into why manufacturers make multiple models, or why a particular technology becomes a product at a given point in its life cycle, let’s try to be agnostic.
All image acquiring technologies work within particular parameters. Some of these parameters can be affected through component selection, but those decisions may also increase costs. Other manufacturing options are simply limited by the current state of the art.
If I am renting equipment for a particular project, my criteria will be different than if the cameras I purchase have to serve multiple purposes. In either case, create a checklist to make sure that nothing is forgotten in the euphoria of obtaining a manufacturer’s newest model camera. I like to start with the price (my budget) and then list the “need to haves” and the “nice to haves”.
Add to these lists: ease of use, lens and audio options, codecs and frame rates, resolution and color depth, and the number of f-stops and contrast range. Then consider size and weight.
Video fidelity is a term we understand but may find hard to define. How do we measure it? Let’s use this as a working definition: fidelity is “given the best reproduction system possible, how closely the image compares to the original scene.” This is where increasing how much, and what kind of, technology (i.e. cost) goes into the manufacturing process can make a difference.
The fidelity of the best DSLR will never equal that of the best digital cinema camera across the board. However, there may be instances where the images are indistinguishable from each other. If your project is destined only for YouTube or another bandwidth-limited delivery channel, then rent the less expensive DSLR instead of a super high-end camera with lots of features you’ll never use.
After considering the glass and the electronics in the camera, consider the fidelity of the image recorded. Starting at the sensor chip, size does make a difference, but it’s not the whole story. The transducers on the chip change photons into electrons. The larger each pixel, the more electrons for the same ambient light, but putting a lens in front of each pixel will do the same, as will increasing the inherent sensitivity of the transducer. This is one argument for 35mm sensors; the other consideration is depth of field.
If you really need a shallow depth of field, get a big sensor. Otherwise, look carefully at the sensitivity specification. Remember, sensitivity goes down as resolution goes up.
Spreading the same amount of light across a 4K sensor and then taking an SD cutout is not the same as shooting SD with the same sensor (combining the output from approx. 16 pixels).
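A toy calculation makes the binning argument concrete. The photon count and the 4 × 4 block size below are illustrative assumptions, not figures for any real sensor:

```python
# Sketch: why combining (binning) a block of small pixels into one large
# output pixel recovers sensitivity. Numbers are illustrative only.

def binned_signal(photons_per_pixel: float, bin_width: int) -> float:
    """Sum the charge from a bin_width x bin_width block of pixels."""
    return photons_per_pixel * bin_width * bin_width

# One 4K pixel catching 100 photons on its own:
single = binned_signal(100, 1)        # 100

# The same light, but combining a 4x4 block (~16 pixels) into one
# SD-sized output pixel gives 16x the signal for the same scene:
combined = binned_signal(100, 4)      # 1600

print(single, combined, combined / single)
```

Cropping an SD window out of the 4K raster, by contrast, keeps the small per-pixel signal and simply throws the rest of the light away.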
While electronic imagers no longer need a mechanical shutter, we still use the term, as in rolling or global shutter. A rolling shutter charges the pixels sequentially; a global shutter charges them all at once.
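A rough model shows why sequential readout distorts motion: each row samples the scene slightly later, so a moving subject lands in a different place on each row, producing the skew seen in the propeller image below. The frame rate, row count, and subject speed here are made-up illustrative numbers:

```python
# Toy model of rolling-shutter skew: a vertical edge moving horizontally
# appears slanted because each row is sampled at a later time.
# All timings and speeds are illustrative assumptions.

ROWS = 1080
ROW_READ_TIME = 1 / 60 / ROWS      # seconds to read out one row (assumed)
EDGE_SPEED = 5000.0                # horizontal speed in pixels/second (assumed)

def apparent_x(row: int, x0: float = 0.0) -> float:
    """Where the moving edge lands in the captured frame for a given row."""
    t = row * ROW_READ_TIME        # this row is sampled t seconds later
    return x0 + EDGE_SPEED * t

top = apparent_x(0)                # 0.0 -- first row sees the edge at its start
bottom = apparent_x(ROWS - 1)      # ~83 px of skew by the last row

# A global shutter samples every row at t = 0, so top == bottom == x0.
print(top, bottom)
```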
The image below shows an extreme example of the kind of distortion a rolling shutter can cause. Note that the plane’s propeller appears to be curved. The screen capture came from an excellent tutorial by James Hutton that illustrates this and other types of image distortion. I recommend you check it out.
A camera's rolling shutter will distort rapidly moving images. Note what appears to be a curved airplane propeller in the video clip.
Rolling shutter issues
Previously, the extra circuitry required to implement a global shutter not only increased cost but also reduced sensitivity, because the shutter circuitry sat in the same plane as the sensing element, reducing the area available for light capture. Fortunately this is changing, and a global shutter is definitely the better solution. In either case, the speed with which the charges can be transferred to memory limits exposure times and frame rates. The 6 gigapixels/second throughput required for 2000+ fps recording requires special sensors.
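The arithmetic behind such throughput figures is simple: pixels per frame times frames per second gives the rate at which charge must move off the chip. The resolutions and frame rates below are illustrative examples, not a claim about any particular camera:

```python
# Back-of-the-envelope sensor readout throughput.

def throughput_gpix(width: int, height: int, fps: int) -> float:
    """Readout requirement in gigapixels/second for a given raster and rate."""
    return width * height * fps / 1e9

# A ~2 MP HD raster at 2000 fps already lands in the gigapixels/second range:
hd = throughput_gpix(1920, 1080, 2000)     # ~4.1 Gpix/s

# A 4K raster at the same frame rate is roughly four times that:
uhd = throughput_gpix(3840, 2160, 2000)    # ~16.6 Gpix/s

print(hd, uhd)
```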
Here is a video that demonstrates very high quality slow motion (high frame-rate) imagery. Cinematographer Greg Wilson and Director Brendan Bellomo were asked by Vision Research and Abel CineTech to shoot the first test footage with the new Vision Research Phantom Flex4K Digital Cinema Camera. (March, 2013). The entire clip can be seen here.
A clip from test footage of the Vision Research Phantom Flex4K, shot by Greg Wilson and Brendan Bellomo. See the video in the clip URL listed above.
Another limitation is the number of photons required to get above the inherent noise level of the electronics. The voltage coming off the sensor is an analog signal; we need to convert it to digital.
Whoops, we almost forgot color! Is each pixel actually three sensor elements, or four, or only one? Do we have a single chip or three chips? Again, what are the tradeoffs, aside from cost? You can make a smaller camera with a single chip. At one point the solution was a mechanical RGB color wheel. Today, a single chip has a color filter in front of it, which absorbs some photons, with all the consequences that follow.
The layout of the filter is a consideration; one possibility is the Bayer pattern. The layout affects the amount of information available in each color as well as how accurately geometric shapes are rendered. I call this “snake oil”: how many sensing elements do I need to define a point in a raster?
Many manufacturers will say “one” and define their color resolution on this basis, but as there are no full-spectrum sensing elements, each element senses only one color. If I have a 1920 × 1080 filter array of equal RGB elements, is this 691,200 RGB pixels, or 2 megapixels as many manufacturers contend? To get the 2 MP, two-thirds of the color information comes from adjacent locations and may not be accurate.
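The counting argument can be spelled out for both filter layouts. A standard Bayer tile is assumed here to be a 2 × 2 block with one red, two green, and one blue element:

```python
# How many colour samples does a 1920x1080 single-chip array really measure?

total = 1920 * 1080                # 2,073,600 photosites in the array

# Equal-thirds RGB filter (the example above): one full RGB pixel needs
# three photosites, so the array yields only:
full_rgb_pixels = total // 3       # 691,200

# A Bayer mosaic (2x2 tile: R G / G B) measures, per channel:
bayer_red   = total // 4           # 518,400 true red samples
bayer_green = total // 2           # 1,036,800 true green samples
bayer_blue  = total // 4           # 518,400 true blue samples

# Either way, two of the three channels at any output pixel are
# interpolated from neighbouring locations, not measured there.
print(full_rgb_pixels, bayer_red, bayer_green, bayer_blue)
```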
Does this mean we can use a 4K sensor to get accurate color information in an HD picture? Theoretically, yes; it’s only software, after all. Which matters more, high resolution or accurate color rendition? At screen sizes up to 50 inches, HD with accurate colors is going to look better than 4K with moiré and false color artifacts. This brings us to the “secret sauce”, but before that, let’s get digital. We will begin that discussion next month.