Virtual Production For Broadcast: Camera Setup, Tracking & Lens Data

Virtual production leaves us freer to use the camera and lens combination of our choice than traditional VFX techniques such as greenscreen, but changes do still need to happen around the camera. Here we discuss what those changes are, what information we generate, and how that information shapes the pictures rendered on the screen.

With the camera naturally at the centre of film and TV production work, a big priority for visual effects techniques is to create spectacular results without imposing too much technology on the camera department. At the same time, if there’s a single, overwhelming advantage of virtual production, it’s exactly that: cinematographers and their crews should be able to shoot advanced visual effects while working in exactly the same way they’d approach an everyday, real-world scene.

Some of the most interesting techniques, however, involve rendering three-dimensional environments in real time and displaying them on the video wall with correct perspective for the current camera position. To do that, the system needs to know where the camera is and how the lens is configured. Tracking that information might mean attaching things to the camera, and virtual production stages must make everything work without compromising the ease and convenience that virtual production can and should give us.
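To make that concrete, the per-frame information a tracking system hands to the renderer can be sketched as a simple record: position, rotation and the current state of the lens. This is a hypothetical structure for illustration only, not any particular product's protocol, with units assumed to be metres and degrees.

```python
from dataclasses import dataclass

@dataclass
class CameraSample:
    """One frame of tracking data passed from the tracking system to the renderer.
    A hypothetical structure for illustration; real systems use their own formats."""
    timecode: str             # SMPTE timecode of the frame this sample describes
    x: float                  # camera position in metres, in stage coordinates
    y: float
    z: float
    pan: float                # camera rotation in degrees
    tilt: float
    roll: float
    focal_length_mm: float    # current lens focal length
    focus_distance_m: float   # current focus distance
    aperture_t_stop: float    # current iris setting

# An example sample: camera 2 m up, 4.5 m back from the wall, panned 15 degrees left.
sample = CameraSample("01:00:00:12", 0.0, 2.0, -4.5, -15.0, 0.0, 0.0, 32.0, 3.2, 2.8)
```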

When There’s No Need To Track

There are many different ways to track a camera and lens, although one of the fundamental truths of virtual production is that not every setup actually requires a tracked camera. Depending on exactly what’s being done, the phrase “virtual production” can cross over with “back projection” (or, similarly, front projection). Using video as part of a scene, whether it’s projected or displayed on an LED wall, yields many of the advantages of virtual production even if it’s nothing more than a static image. It’s still an in-camera effect, ably handling most types of reflection, the fine details of smoke and hair, and other subjects that greenscreen may handle less well.

There are limits to how this sort of configuration can be shot – usually, the camera will need to be broadly in front of the video display. Some types of camera motion will reveal the trick, as will specific types of reflection (though that’s true for fully three-dimensional, tracked-camera configurations, too). Material to be shown on the video display will need to have been shot in a manner that’s suitable for the intended purpose, which can be complicated, and considerations such as colour correction and synchronisation still need to be right. Still, this approach to virtual production hybridised with front or back projection remains very effective, and requires no camera tracking at all.

Tracking Technologies

Sometimes, virtual productions will need the flexibility in camera position that’s only provided by a fully three-dimensional scene, or even a partially three-dimensional scene, perhaps based on modified live-action footage, sometimes termed 2.5D. In that situation, we can rely on some fairly established techniques to create the necessary camera tracking data. Figuring out where something is in three-dimensional space is something that visual effects and games development people have been doing for years. It even appears in consumer technology, with virtual reality headsets capable of tracking their own position so that the wearer’s point of view can appear to move through a virtual world (sometimes, those devices can even be pressed into service as virtual production tools).

There are other ways of knowing the position and rotation of the camera, particularly by mounting it on a motion control crane which will put it in a known position to begin with. Tracking the camera, meanwhile, allows for the use of any conventional camera support equipment. The image of an actor performing in a suit covered in reflective tracking markers is familiar to most people, and computer-generated productions have often tracked both the performers and an actual on-set camera to create a virtual camera position which might be used to frame the final shot once a visual effects sequence is rendered. Very similar – or even identical – devices can be used to track cameras, and maybe other objects, for virtual production.

Probably the most commonly encountered approach to any sort of motion tracking is for an array of cameras – sometimes called witness cameras, to differentiate them from the taking camera – to observe markers. Markers might be simple reflective spheres, or patterns such as circular or two-dimensional barcodes. Witness cameras might be distributed around the outside of the studio to observe markers moving around on the studio floor, often termed ‘outside-in’ tracking, or mounted on the camera itself to view markers placed around the studio, called ‘inside-out’ tracking.

Established Technology

None of this technology is unique to virtual production, and many systems are sold as suitable for virtual studio, virtual production, and other 3D graphics work. Inside-out tracking has often been used in applications such as broadcast news studios, allowing virtual objects to appear in a real space. Studios with circular barcodes on the ceiling, often with upward-pointing witness cameras mounted on each studio camera, have been common for over a decade.

Variations include systems which display tracking markers on the virtual production display itself, using the time between frames when the shutter of the taking camera (whether physical or electronic) is closed. To the naked eye, the tracking pattern appears superimposed on the virtual production image; electronically, the witness cameras can see the tracking markers while the taking camera can’t. This is useful because many virtual production facilities have a large sweep of LED video wall, which leaves few good places to mount physical tracking markers and could otherwise compromise the accuracy of the results.
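The timing is simple to sketch. Assuming a genlocked system in which the tracking pattern can be flashed at any point while the taking camera's shutter is closed, the available window in each frame follows directly from the frame rate and shutter angle (a simplified model which ignores the LED wall's own refresh behaviour):

```python
def shutter_closed_window(frame_rate_hz: float, shutter_angle_deg: float):
    """Return (start, duration) in seconds of the part of each frame period when
    the taking camera's shutter is closed, measured from the start of the frame.
    Assumes the shutter opens at the start of each frame period. Tracking patterns
    flashed inside this window are visible to the witness cameras but never appear
    in the recorded image."""
    frame_period = 1.0 / frame_rate_hz
    open_time = frame_period * (shutter_angle_deg / 360.0)
    return open_time, frame_period - open_time

# At 24 fps with a 180-degree shutter, the shutter is closed for about 20.8 ms per frame.
start, duration = shutter_closed_window(24.0, 180.0)
print(f"closed from {start * 1000:.1f} ms for {duration * 1000:.1f} ms")
```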

Regardless of the configuration, material from the witness cameras will be routed to a computer that’s responsible for interpreting their images and calculating positions, and which then passes that position data on to the rendering system. Many purpose-built motion tracking cameras use Ethernet networking to support a large number of cameras with minimal cabling; the image sent is sometimes nothing more than a cluster of black pixels representing markers on a white field. Cameras may have infra-red lighting distributed around the lens – a ring light – and an infra-red pass filter. That creates a clear reflection from markers with a retro-reflective coating, much as a car’s headlights illuminate a road sign at night.
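As a rough illustration of that processing, a witness-camera frame can be reduced to a short list of marker centroids before anything is sent over the network. The sketch below assumes the markers appear as the brightest pixels in the frame; some systems invert the image before transmitting it, as described above.

```python
import numpy as np
from scipy import ndimage

def marker_centroids(frame: np.ndarray, threshold: int = 200):
    """Reduce a witness-camera frame to a list of 2D marker centroids.
    Many tracking cameras do this on-board, so only a handful of coordinates
    per frame need to travel over the network rather than whole images."""
    bright = frame >= threshold               # retro-reflective markers show up as bright blobs
    labels, count = ndimage.label(bright)     # group connected bright pixels into blobs
    return ndimage.center_of_mass(bright, labels, list(range(1, count + 1)))

# A synthetic 8-bit frame containing a single 4x4-pixel marker blob:
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:104, 200:204] = 255
print(marker_centroids(frame))                # approximately [(101.5, 201.5)]
```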

The more cameras which can see a marker, the better the track. Common systems can locate a marker in a space tens of metres across with a precision of less than a millimetre.
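The underlying geometry can be sketched as a least-squares intersection of rays: every witness camera that sees the marker contributes one ray, and the estimated position is the point closest to all of them. This is a simplified illustration; a real system also has to calibrate each witness camera's own position, orientation and lens, and will filter the result over time.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares estimate of the 3D point closest to a set of camera rays.
    origins: (N, 3) witness-camera positions; directions: (N, 3) vectors from each
    camera towards the observed marker. More rays (more cameras seeing the marker)
    over-determine the solution and average out per-camera noise."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane perpendicular to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three cameras, each looking at a marker near (1, 1, 2) metres:
origins = [[0, 0, 0], [4, 0, 0], [0, 4, 0]]
directions = [[1, 1, 2], [-3, 1, 2], [1, -3, 2]]
print(triangulate(origins, directions))   # approximately [1, 1, 2]
```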

Lens Data

Knowing where the camera is and where it’s pointing is only part of the information a rendering system needs to draw a three-dimensional scene and display it such that it looks correct to the taking camera. It’s intuitive that a wider-angle lens, for instance, will mean displaying an image over a wider area of the wall; with a zoom lens, that area might change over time.
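The relationship is easy to sketch with a thin-lens approximation, ignoring distortion and assuming the camera is square-on to the wall; the sensor width, focal length and distance below are arbitrary example values.

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view of an ideal, distortion-free lens."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def wall_coverage_m(sensor_width_mm: float, focal_length_mm: float, distance_m: float) -> float:
    """Approximate width of wall the camera sees at a given distance: the region the
    renderer must fill with correctly-projected imagery for this camera position."""
    return distance_m * sensor_width_mm / focal_length_mm

# A Super 35-sized sensor (about 24.9 mm wide) behind a 25 mm lens, 6 m from the wall:
print(horizontal_fov_deg(24.9, 25.0))    # roughly 53 degrees
print(wall_coverage_m(24.9, 25.0, 6.0))  # roughly 6 m of wall in frame
```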

Similarly, the focus position of a lens, as well as its aperture setting, will change the way out-of-focus areas of the image are rendered depending on their distance from the camera. The rendering system must simulate that falloff of focus in the virtual world. All that data might be available from inbuilt encoding systems on some lenses (there are two principal standards), or from add-on devices such as a remote follow focus. Technical provisions for retrieving that information from the camera and passing it to the tracking system vary, although it has been common for this information to be used for conventional visual effects for some time.
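As a rough sketch of what the renderer does with that data, the near and far limits of acceptable focus can be estimated from the encoded focal length, aperture and focus distance using the standard thin-lens depth-of-field formulas. The circle-of-confusion value below is an assumed example, and a real renderer models defocus in considerably more detail.

```python
def depth_of_field_m(focal_length_mm, f_number, focus_distance_m, coc_mm=0.025):
    """Approximate near and far limits of acceptable focus (thin-lens model).
    The renderer can apply the same focus distance and aperture reported by the
    lens encoders to blur virtual objects according to their distance from camera."""
    f = focal_length_mm
    s = focus_distance_m * 1000.0                  # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f   # hyperfocal distance
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near / 1000.0, far / 1000.0

# A 50 mm lens at f/2.8 focused at 3 m keeps roughly 2.8 m to 3.3 m acceptably sharp:
print(depth_of_field_m(50.0, 2.8, 3.0))
```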

Lens Geometry

Most people have seen an image produced by a fisheye lens, where straight lines appear bowed. While that’s most associated with very wide angle lenses, most lenses have at least some geometric distortion, often described using terms such as spherical, barrel or pincushion distortion. Even lenses which don’t show obvious signs of geometric distortion will generally have at least some; very few lenses – formally, perhaps none at all – are entirely rectilinear, or free of distortion. That can sometimes be demonstrated by lining up a horizontal or vertical line with the edge of a monitor.

As with many types of visual effects photography, lens geometry can be accounted for by shooting test charts – which might mean a chequerboard or grid pattern – allowing a computer to characterise the lens’s distortion and correct for it. While this is often effective, different software has different capabilities, and different lenses may create complex distortions that, while subtle, can make alignment difficult. Lens designers often try to correct for these distortions in the optical design, and the residual distortion left after those corrections can be more complex than a simple continuous curve. That’s especially true of anamorphic lenses, which have different characteristics in the horizontal and vertical dimensions. Some lenses may also exhibit geometric distortion which changes as the focus is altered.
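In practice that characterisation is often done with off-the-shelf tools. As a minimal sketch, OpenCV's chequerboard calibration estimates a camera matrix and a simple radial and tangential distortion model from a set of chart images; the folder name and pattern size below are assumptions, and the complex or anamorphic distortions described above may need a more elaborate model.

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed chequerboard (an assumption; match the real chart).
PATTERN = (9, 6)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("charts/*.png"):            # hypothetical folder of chart frames
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the camera matrix and distortion coefficients (k1, k2, p1, p2, k3) of a
# simple model, which can then be used to undistort frames or match the renderer.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("charts/frame0.png"), mtx, dist)
```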

Some lenses, in some circumstances, may have characteristics so complex that they’re beyond the ability of the software to correct, potentially creating hard-to-solve alignment problems. Using those lenses in virtual production might be difficult. In general, though, one of the biggest advantages of virtual production is that vintage or unusual lenses with interesting optical characteristics are handled seamlessly. Unusual flares, glow, corner softness, or even optical filtration are a great way to tie the real and virtual parts of the scene together. So, where it’s possible to take the time in prep to make difficult lenses work, there are benefits for both the effects and camera teams.
