Creative Technology: Getting It Right In Camera

With the advent of log recording, higher resolutions, and large-format cameras, DOPs are increasingly entertaining the notion that just about anything can be ‘fixed’ or finished in post.

At first glance, there appears to be great truth in this, as the downstream power of cinematographers continues to grow exponentially. Today, with relative ease, we can change day to night, tamp down high-contrast scenes captured in bright sun, and remove intruding or distracting objects like utility wires with a few clicks of the mouse. So yes, given today’s RAW and lightly compressed recording formats like ProRes 4444, we have more ability than ever to crop, adjust, and remediate images post-camera. The question is: Does it still make sense to get it right in camera?

Gone are the days of carrying a hoard of 100 image control filters to set. Within certain limits, most straightforward color and white balancing tasks can be achieved post-camera.

For quasi-routine tasks like image stabilization and wire removal, post-camera remediation is certainly reasonable, and most DOPs have adopted the approach for such chores. Today, most folks agree that subtle adjustments to white balance, color, contrast, and noise level can safely be addressed in the post-production environment. More significant shifts in one or more color channels, however, are another matter entirely, as such dramatic changes can increase noise and degrade fine detail and color fidelity.

For DOPs, the issue becomes really knowing our limits downstream, and how much post-camera maneuvering is possible or practical.

Of course, baking in the show LUT on set would obviate much of the post-camera shenanigans. While some cameras and workflows allow for capturing multiple streams with and without an applied show LUT, the fact is some tweaking downstream is pretty much a foregone conclusion.

The amount of camera stabilization that is achievable in post, for example, depends on many factors, including frame size, resolution, and codec. DOPs must assess the degree of post-amelioration desired and figure in their particular camera setup and recording parameters.
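
As a rough illustration of the frame-size factor alone, the sketch below (hypothetical numbers and function, not drawn from any particular production) estimates how much stabilization crop-in a given capture/delivery pairing leaves before the deliverable has to be upscaled:

```python
# A rough sketch, with hypothetical numbers, of the frame-size factor:
# how much stabilization crop-in a capture/delivery pairing leaves before
# the deliverable frame must be upscaled.

def stabilization_headroom(capture_w: int, capture_h: int,
                           deliver_w: int, deliver_h: int) -> float:
    """Maximum crop-in, as a fraction of the frame, that still yields a
    full-resolution deliverable without upscaling."""
    scale = min(capture_w / deliver_w, capture_h / deliver_h)
    return max(0.0, 1.0 - 1.0 / scale)

# UHD capture delivered in HD: up to a 50% crop-in remains available.
print(stabilization_headroom(3840, 2160, 1920, 1080))   # 0.5
# UHD capture delivered in UHD: any stabilization crop forces upscaling.
print(stabilization_headroom(3840, 2160, 3840, 2160))   # 0.0
```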

The flicker from out-of-sync TV sets, computer monitors, and discontinuous lighting such as neon signs is another area of concern best addressed in camera. DOPs should strive to reduce or eliminate the flicker through use of a variable, clear scan, or synchroscan shutter, like that found in later-model Panasonic Lumix GH cameras. Shooting 24p in 50Hz countries? Set the camera shutter (in degrees) to 172.8°. Shooting 30p in 50Hz countries? Set the camera shutter to 108°, 216°, or 324°. The variable or synchroscan shutter is the key to avoiding flicker from a mismatch between the frame rate and the local mains frequency.
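
The arithmetic behind those settings is simple enough to sanity-check. The minimal sketch below (the function name and rounding are mine, for illustration) enumerates the shutter angles whose exposure spans a whole number of light-flicker periods, since mains-powered sources pulse at twice the mains frequency:

```python
# A minimal sketch of the arithmetic behind the shutter angles above.
# Mains-powered sources flicker at twice the mains frequency (100 Hz on
# 50 Hz mains), so the shutter must stay open for a whole number of
# flicker periods to average out the pulsing.

def flicker_free_angles(fps, mains_hz):
    """Shutter angles (degrees, up to 360) whose exposure time is an
    integer multiple of the light-flicker period."""
    base = 360.0 * fps / (2 * mains_hz)   # angle equal to one flicker period
    angles = []
    n = 1
    while n * base <= 360.0:
        angles.append(round(n * base, 1))
        n += 1
    return angles

print(flicker_free_angles(24, 50))   # [86.4, 172.8, 259.2, 345.6]
print(flicker_free_angles(30, 50))   # [108.0, 216.0, 324.0]
```

Of the 24p options, 172.8° is the usual pick because it sits closest to a normal 180° shutter.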

The Panasonic Lumix GH6 features a synchroscan shutter that eliminates flicker from computer monitors and other asynchronous light sources, as did some earlier-model Lumix GH series cameras. Addressing such issues in camera obviates the need for less-than-ideal post-camera fixes.

Hoping to remove flicker in post is a tactic fraught with peril. While software solutions exist and may work on occasion, more often they work poorly or not at all. For DOPs, the effectiveness of a post solution is a function of the flicker cadence; a regular, predictable pattern from frame to frame is much easier to ameliorate in software. Severe flicker, like the kind typically encountered in urban nightscapes illuminated by neon or mercury vapor, can produce widely varying exposure from frame to frame. It is those underexposed frames, lacking detail and with deep, impenetrable shadows, that cannot be fixed in post.

There is also the matter of performing significant cropping after the initial image capture, a practice that has gained popularity owing to today’s very large frame sizes. If applied excessively, this narrowing of field of view without a corresponding reduction in depth of field produces a highly unnatural, potentially audience-alienating effect. The cropping of scenes in post is not the same as using a longer lens on the camera!
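
To make the point concrete, here is a simplified, hypothetical comparison (the lens, aperture, subject distance, and circle-of-confusion values are mine, chosen for illustration) of a 35mm lens cropped 2x in post versus a real 70mm lens, using the common depth-of-field approximation DoF ≈ 2·N·c·s²/f² and tightening the circle of confusion by the crop factor to account for the extra enlargement:

```python
import math

# Hypothetical full-frame example: a 35 mm lens cropped 2x in post versus a
# real 70 mm lens. Framing ends up identical, but depth of field does not.

def horizontal_fov_deg(focal_mm, sensor_w_mm):
    """Horizontal angle of view for a given focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_mm)))

def approx_dof_m(focal_mm, f_number, subject_m, coc_mm):
    """Common approximation DoF = 2*N*c*s^2 / f^2 (subject well inside
    the hyperfocal distance)."""
    return 2 * f_number * coc_mm * (subject_m * 1000) ** 2 / focal_mm ** 2 / 1000

SENSOR_W, COC, STOP, DIST = 36.0, 0.03, 2.8, 3.0   # mm, mm, T-stop, metres

print(horizontal_fov_deg(35, SENSOR_W / 2))   # ~28.8 deg: 35 mm cropped 2x
print(horizontal_fov_deg(70, SENSOR_W))       # ~28.8 deg: real 70 mm, same framing
print(approx_dof_m(35, STOP, DIST, COC / 2))  # ~0.62 m in focus with the post crop
print(approx_dof_m(70, STOP, DIST, COC))      # ~0.31 m with the longer lens
```

Even with the circle of confusion tightened for the enlargement, the post crop carries roughly twice the depth of field of the longer lens at the same framing, and that mismatch is precisely what reads as unnatural.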

The objectionable flicker in time-lapse scenes or in scenes containing discontinuous light sources is almost always best addressed during original image capture. The flicker’s irregular cadence produces significant underexposure from frame to frame that precludes a quick and easy digital fix downstream.

Getting it ‘right’ in camera requires minimizing the noise that can deleteriously impact the video quality downstream. Understandably, we are reluctant to apply NR in-camera, since this can strip out fine detail along with the noise.

If reduced frame size and resolution upon output is a viable option, DOPs can adopt a strategy of oversampling during image capture. Shooting 4K for HD release? The downscaled output averages four pixels into one, roughly halving random noise and producing a markedly cleaner picture in the HD stream.
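
A quick synthetic demonstration of that averaging (NumPy, fabricated noise values, no real camera data) shows the effect on uncorrelated noise:

```python
import numpy as np

# Synthetic demonstration of 4K-to-HD oversampling: each 2x2 block of UHD
# pixels is averaged into one HD pixel, cutting uncorrelated noise roughly
# in half (a factor of sqrt(4)). Values are fabricated, not camera data.

rng = np.random.default_rng(0)
clean = np.full((2160, 3840), 0.5)                    # flat mid-grey "scene"
noisy = clean + rng.normal(0.0, 0.02, clean.shape)    # add sensor-like noise

# 2x2 binning: group each 2x2 neighbourhood and average it.
hd = noisy.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(round(float(noisy.std()), 4))   # ~0.02  noise in the UHD frame
print(round(float(hd.std()), 4))      # ~0.01  roughly halved after downscaling
```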

In most productions, reducing or eliminating noise is a laudable goal that can be facilitated through use of a physical, image-enhancing ‘diffusion’ filter. Filters like Schneider’s Digicon and Tiffen’s many iterations of GlimmerGlass can serve as excellent grain-tightening, noise-reduction utilities.

To DOPs, getting it ‘right’ in camera is critical to avoid tough-to-resolve problems downstream. Camera setup – black level, LUT, frame size, etc. – and physical ‘grain-tightening’ diffusion and polarizer filters are prime considerations. The flattering look of a Schneider Digicon, with its many irregularly interspersed elements, is difficult or impossible to achieve via a generalized software solution.

In broad strokes, a scene’s contrast, look, and feel can also be addressed digitally by adjusting the camera LUT or black level. Ideally, such approaches should be used in tandem with a measured downstream strategy, as most DOPs will invariably tweak, however slightly, the contrast or milkiness of scenes during grading and color correction.
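
As a toy illustration only (not any camera’s actual LUT math), lifting the black point amounts to a simple linear remap of normalized pixel values, which is why even a small lift reads as milkiness in the shadows:

```python
import numpy as np

# Toy illustration of a black-level lift on normalized (0-1) pixel values.
# Not any camera's actual LUT math -- just the basic linear remap.

def lift_blacks(pixels, lift):
    """Raise the black point by `lift` while keeping white pinned at 1.0."""
    return lift + (1.0 - lift) * np.asarray(pixels)

shadows = [0.0, 0.05, 0.10]
print(lift_blacks(shadows, 0.05))   # [0.05 0.0975 0.145] -- shadows go milky
```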

It is critical to note that detail not captured in the original image is lost forever and cannot be restored later. Accordingly, many DOPs get it ‘right’ in camera by utilizing a polarizing filter for nearly every setup. The improved rendering of the sky and clouds, reduced glare off glass surfaces, and enhanced texture in actors’ skin are usually desirable effects that cannot be satisfactorily reproduced or approximated in post. Suffice it to say, the polarizer is the only physical filter that can increase the apparent resolution and level of detail in the captured image.

In years past, for most DOPs, ‘fixing’ an image or finishing it in post entailed a complex process that was impractical and pricey. With the advent of low-cost digital tools such as Adobe’s AI-powered neural filters and vector stabilizers, post-camera processes have become an integral part of our modus operandi. Savvy DOPs would do well to understand that getting it ‘right’ in camera is still a worthwhile and eminently critical goal as we grapple with the promise and limitations of our evolving craft.
