Camera Matching in Multi-Codec and Multi-Gamut Colour Space

I remember “painting” the cameras, riding the CCUs to make sure that the trees did not change from emerald green to grass green when the fader bar on the switcher was pulled. Colour grading in the post suite has made the crude tools we used to use outdated, but what about live production in the new multi-digital world? When colours change between shots it breaks the illusion. We have become lax about this because the tools are simply not available.

A helmet camera, a DSLR, a mobile phone and your trusty CCD camera: can we match them?

Since you will probably have to accept the consequences of delay anyway (see Ned Soseman’s boat racing stories), throw some graphics card processing into each feed and it becomes possible.

If you can get a reference into the shot (“Can you center the logo on the sponsor’s tent?”) and capture that, then software can do a lot. The same logo with known lighting will tell you how it is supposed to look, but can the software determine whether the colour shift is because of the camera or because of the location lighting? No, because you have two variables.
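To make that concrete, here is a minimal sketch of what software can do with a single known reference such as the logo: derive per-channel gains from how the logo was captured versus how it should look, and apply them to the whole frame (a von Kries style diagonal correction). The function and values are illustrative, and, exactly as noted above, the correction folds camera and lighting into one combined shift; it cannot attribute the error to either.

```python
# A minimal sketch of single-reference colour balancing. Given the average
# RGB of a known logo patch as it should appear (reference) and as it was
# captured, derive per-channel gains and apply them to the whole frame.
# This corrects the combined camera + lighting shift without attributing it.
import numpy as np

def balance_to_reference(frame, captured_rgb, reference_rgb):
    """frame: float HxWx3 image in [0, 1], linear light assumed.
    captured_rgb / reference_rgb: mean RGB of the logo patch, shape (3,)."""
    gains = np.asarray(reference_rgb, dtype=np.float32) / \
            np.maximum(np.asarray(captured_rgb, dtype=np.float32), 1e-6)
    return np.clip(frame * gains, 0.0, 1.0)

# Example: the captured logo reads warmer (too much red) than the reference.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
corrected = balance_to_reference(frame,
                                 captured_rgb=[0.62, 0.48, 0.40],
                                 reference_rgb=[0.55, 0.50, 0.45])
```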

One thing that would help would be the logo shot with the same camera under the same lighting as your reference shot, except that we still have the variable settings on the camera. If we can get those settings off the camera (streaming metadata, anybody?) then we can automate the process. That’s a lot of ifs! For the time being an operator will still be required. What is the skill set, and how much time per camera/shot will be required?
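As a hint of what streamed metadata would buy us, here is a sketch assuming the camera reports its ISO, shutter time and f-number. Image brightness for a given scene is roughly proportional to shutter time × ISO / f-number², so two feeds can be pre-equalised in linear light before any colour matching begins. The dictionary layout here is an assumption, not any camera’s actual metadata format.

```python
# A sketch of exposure equalisation from camera metadata (hypothetical
# layout). Image brightness is roughly proportional to
# shutter_time * ISO / f_number^2 for a given scene.
import math

def exposure_gain(meta_src, meta_ref):
    """Linear gain to apply to the source feed so its exposure matches the
    reference feed. Each meta dict: {'iso', 'shutter_s', 'f_number'}."""
    def exposure(m):
        return m['shutter_s'] * m['iso'] / (m['f_number'] ** 2)
    return exposure(meta_ref) / exposure(meta_src)

# The phone shot at ISO 400, 1/60 s, f/1.8; the main camera at ISO 200, 1/50 s, f/4.
gain = exposure_gain({'iso': 400, 'shutter_s': 1 / 60, 'f_number': 1.8},
                     {'iso': 200, 'shutter_s': 1 / 50, 'f_number': 4.0})
stops = math.log2(gain)  # how many stops apart the two feeds sit
```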

Camera Matching or Colour Grading?

One of the big differences between camera matching and colour grading is the goal. Live camera matching implies that you are trying to reproduce the scene accurately, and compromising this accuracy is done only reluctantly in order to get all cameras to match. Colour grading is about the “look” and “tone” of the finished image. As long as the systems were completely different, chemical reactions for film grading and analogue circuitry for camera control, there was no overlap. Now that the underlying hardware, graphics engines, is the same, it may be time to rethink.

This implies that the needed skill set will be the same whether grading or camera matching, except that we have to do it on a live stream, which means we get one chance at getting it right: no re-dos.

So how much image manipulation can be done within today’s graphic systems? Tons.

In this image the performer is initially shown prior to image enhancements. Next, we’ll let the magic of software and expertise dazzle you.

Now, let’s see this process in action. In this video a good graphics system has been combined with the skills of an excellent operator. You will see how Figure 1 was changed into Figure 2, all with live video.

Final image enhancement. If you watch the video, you will notice the artistic skills used to manipulate the image. Images courtesy Boggie - Nouveau Parfum, via Vimeo.

OK, so live Photoshop may not be what we want. Even though the final image is unreal, the point is that today’s graphics processing engines can do this live.

One grading manufacturer offers to automatically give you a primary base grade by analyzing shots containing standard colour charts. “This lets you set the source gamma, target gamma and target colour space for the chart used in your shot. Simply use the chip grid to identify the colour chip chart and the images will automatically balance, even if they come from different cameras, under different lighting conditions and with different colour temperatures”.
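Under the hood, that chart-based auto-balance amounts to a straightforward fit. Here is a minimal sketch assuming the patch colours have already been sampled from the shot and that both sides are in linear RGB (a real grading tool also handles the gamma and colour-space conversions the quote mentions): solve by least squares for the 3x3 matrix that maps the measured patch values onto the chart’s published reference values.

```python
# A minimal sketch of chart-based balancing: fit a 3x3 colour matrix that
# maps patch colours measured in the shot onto the chart's reference values.
import numpy as np

def fit_colour_matrix(measured, reference):
    """measured, reference: (N, 3) arrays of patch RGB values, N >= 3.
    Returns the 3x3 matrix M minimising ||measured @ M.T - reference||."""
    M_T, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M_T.T

def apply_matrix(frame, M):
    """frame: HxWx3 linear RGB image in [0, 1]."""
    return np.clip(frame @ M.T, 0.0, 1.0)

# 24 patches sampled from the chart in the shot vs. its published values.
measured = np.random.rand(24, 3)   # stand-in for patches sampled from the feed
reference = np.random.rand(24, 3)  # stand-in for the chart's published values
M = fit_colour_matrix(measured, reference)
```

Run the same fit independently on each camera’s view of the chart and every feed lands in the same target space, which is essentially what the quoted feature claims.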

So if I am shooting from a helicopter I should throw one of these charts into the air and hope to get it in the shot? All kidding aside, this is a good idea, and should be part of the solution. Another part is the ability to pick an object in two shots and have the gamut etc. of the whole scene shift to match. Doing this on the center of attention will go a long way to alleviating colour shifts that may take place elsewhere. Still, running this sort of thing without some kind of operator oversight is not a good idea.
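That pick-an-object idea can also be sketched simply: measure the mean and spread of the chosen region in both feeds, then shift and scale the whole source frame so its region statistics match the reference. This is a Reinhard-style statistics transfer, done here per RGB channel for brevity; a production tool would more likely work in a perceptual space such as CIELAB.

```python
# A sketch of matching two feeds on a chosen object: transfer the mean and
# standard deviation of a region of interest from the reference feed to the
# whole source frame (per-channel Reinhard-style statistics transfer).
import numpy as np

def match_on_region(src_frame, src_roi, ref_frame, ref_roi):
    """roi = (y0, y1, x0, x1); frames are HxWx3 floats in [0, 1]."""
    s = src_frame[src_roi[0]:src_roi[1], src_roi[2]:src_roi[3]]
    r = ref_frame[ref_roi[0]:ref_roi[1], ref_roi[2]:ref_roi[3]]
    s_mu, s_sd = s.mean(axis=(0, 1)), s.std(axis=(0, 1)) + 1e-6
    r_mu, r_sd = r.mean(axis=(0, 1)), r.std(axis=(0, 1))
    # Shift/scale the whole frame so the ROI statistics line up.
    return np.clip((src_frame - s_mu) * (r_sd / s_sd) + r_mu, 0.0, 1.0)
```

Because the correction is global, it behaves exactly as described above: the centre of attention matches, while colours elsewhere may still drift, which is why operator oversight remains sensible.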

A camera control unit (CCU) on the left and a grading system on the right. Appearances alone tell you different operating skills are needed.

Neither of these interfaces really works for what we want to do, and each took decades of evolution to achieve the results required. I have spoken to software developers who say they are working on a solution that would adjust all feeds simultaneously for a specific look without requiring an additional operator.

Technical image feedback, which is currently supplied by waveform monitors and vectorscopes, may disappear just as oil pressure and ampere gauges have disappeared from the dashboard. The manufacturers are looking for input from the user community, so let’s look at a typical use case.

Again from the boat racing scenario: “One unit went up in a helicopter with an iPhone camera source. It was on-air live for nearly 10 minutes, flying over the race course lined with thousands of spectator boats tied together.”

I am not going to write a formal use case, but the informal version looks like this: The camera operator notifies the TD that she is ready to transmit. The TD gives the OK. The operator starts transmitting. The TD checks the signal and makes any necessary adjustments, or asks the shooter to change something. The signal goes live when it can or must.

Legacy CCUs allow the operator to control iris, gain, etc., as if we were working on RAW data. An iPhone doesn’t give us this, so we need the camera operator to intervene. The software engineering required to allow Wi-Fi control of these features is not a major effort. Even if we cannot have a RAW stream from the camera, Wi-Fi control will give us most of the advantages.
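What might that Wi-Fi control look like? A sketch follows, assuming a hypothetical camera app exposing a small JSON-over-HTTP API. The host, endpoint and field names are invented for illustration; real phone apps and camera protocols each define their own.

```python
# A sketch of remote parameter control over Wi-Fi. Everything here
# (endpoint, field names) is hypothetical, for illustration only.
import json
import urllib.request

def set_camera_params(host, iso=None, shutter_s=None, wb_kelvin=None):
    """POST the requested settings to the camera app; returns its reply."""
    params = {k: v for k, v in
              {'iso': iso, 'shutter_s': shutter_s, 'wb_kelvin': wb_kelvin}.items()
              if v is not None}
    req = urllib.request.Request(
        f'http://{host}/v1/exposure',  # hypothetical endpoint
        data=json.dumps(params).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
        method='POST')
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp)

# Lock the phone at ISO 100, 1/50 s, 5600 K so downstream correction stays stable:
# set_camera_params('192.168.1.42', iso=100, shutter_s=1 / 50, wb_kelvin=5600)
```

Locking exposure and white balance at the camera is the important part: once the feed stops drifting, the downstream matching only has to be set once per shot instead of being chased continuously.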

Any required image signal processing should take place before format conversion, but include a feedback loop with the converter so that parameters can be automatically optimised. The ergonomics of the TD’s interface are open to discussion, but a simple indicator of degree of match with an optional “area of interest” should be implemented.
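The match indicator itself could be as simple as scoring how closely the area of interest agrees between two feeds. A minimal sketch, using the distance between mean colours as a crude proxy (a production tool would more likely compare in a perceptual space and report a Delta-E):

```python
# A sketch of a "degree of match" score between two feeds over a shared
# area of interest: 1.0 means the region means are identical, lower means
# a visible mismatch. A crude proxy for a perceptual Delta-E comparison.
import numpy as np

def match_score(frame_a, frame_b, roi):
    """roi = (y0, y1, x0, x1); frames are HxWx3 floats in [0, 1]."""
    y0, y1, x0, x1 = roi
    mu_a = frame_a[y0:y1, x0:x1].mean(axis=(0, 1))
    mu_b = frame_b[y0:y1, x0:x1].mean(axis=(0, 1))
    dist = np.linalg.norm(mu_a - mu_b)  # maximum possible distance is sqrt(3)
    return float(1.0 - dist / np.sqrt(3.0))
```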

Manufacturers see this as an opportunity. It is up to us, the users, to tell them what we want, otherwise they will modify what they have as they see fit.
