Virtual Production For Broadcast: Lighting Tools For The Virtual World

When conventional VFX are produced, there’s often a real-world lighting reference available. That approach can be used in virtual production, but increasingly, the director of photography might want or need to have some pre-production involvement in the development of a virtual world. The job may be familiar, but the tools are likely to be new.

Camera and lighting people who are used to lighting real sets are sometimes put off by the idea of lighting a virtual one. After all, there’s no such thing as an 18K HMI or a SkyPanel in the virtual world – although once we look a little closer, we discover that there are some pretty close equivalents.

Often, virtual production specialists – the computer people – will fill the role of a grip and electric team. Those terms vary somewhat in meaning on either side of the Atlantic, but either way, the software used to create virtual worlds is often capable of emulating most of the fundamentals of lighting equipment and procedure found on film and TV sets.

Not every production will use a large, complex, custom-built three-dimensional environment. Where live-action material is part of the virtual environment, a combination of conventional camerawork, compositing and grading, and 3D world building might be involved. Either way, most virtual environments will need at least some lighting to create an appropriate look and to match other live-action footage, such as the foreground elements of the virtual production studio shoot. Creatively, the people responsible for generating the virtual world will consult with the production’s director of photography. It’s understandable that this might create some uncertainty for someone whose experience lies with practical, real-world lighting tools and, often, a particular team of people.

Fidelity & Performance

It would be a mistake for a director of photography to become too concerned with the mathematics underlying lighting in computer-generated imagery. The details are either handled automatically by the software or managed by the specialists involved; it’s their job to work with a cinematographer on that cinematographer’s own terms as much as possible. Even so, an understanding of the trade-offs between flexibility, realism and performance can make good results more accessible.

Rendering realistic three-dimensional scenes in real time tests the limits of what modern computers can do, and it’s normal for software to use lighting simulations that look highly realistic without being a precise mathematical simulation of the real world. Recent developments have enormously improved the accuracy and flexibility of lighting, mitigating those compromises to some extent. The need for real-time performance still means some degree of approximation, though, and those approximations often come with requirements: restrictions on whether objects or lights can alter position, colour or brightness in real time; limits on how far each light projects across the world; simplifications in the behaviour of reflected and refracted light; and special handling for situations such as cloud, haze or smoke.

The Basics

Virtual production relies on technology developed for video games. Three-dimensional graphics of this type date back to perhaps the 1970s, though real-time rendering only became practical in arcade games and home computers in the late 80s and early 90s. Real-time results good enough to look anything like real are mainly a phenomenon of the late 2010s, depending on the subject.

Most of these systems represent objects using triangular polygons, chosen because any shape defined by any three points can only ever be a flat plane (for the same reason a tripod is always stable, even on rough ground, while four-legged tables might need a wedge under one leg). Early systems assigned each triangle a colour and plotted it on screen, though simple lighting was quickly added. Designating a point in space as a light source allows the code to calculate the angle between any polygon and the light to control brightness – the surface looks brightest when it is pointing directly toward the light. Repeat that over the polygons describing an object, and the object reacts somewhat correctly to light.
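That angle-based brightness rule is the Lambertian (cosine) model, and it can be sketched in a few lines of Python. The function and vectors here are purely illustrative, not any particular engine’s code:

```python
import math

def lambert_brightness(normal, light_dir):
    """Diffuse brightness of a flat surface: the cosine of the angle
    between the surface normal and the direction toward the light,
    clamped so surfaces facing away from the light go to zero.
    Both vectors are assumed to be unit length."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

# Facing the light directly: full brightness.
print(lambert_brightness((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # 1.0

# Tilted 60 degrees away from the light: roughly half brightness.
angled = (0.0, math.sin(math.radians(60.0)), math.cos(math.radians(60.0)))
print(lambert_brightness((0.0, 0.0, 1.0), angled))
```

Repeating that dot product over every polygon of an object is exactly the per-triangle calculation the early systems performed.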

That was cutting edge in the late 70s, but it doesn’t allow objects to cast shadows unless other techniques are used to approximate them. One common approach, often called baked or light-mapped shadows, essentially paints certain parts of the object with darker colours to simulate shadowing. Those shadows can be calculated during the design phase of the process, so accurate, attractive results are possible. That works fine until the object or the light moves.

Even with pre-calculated shadows, light still doesn’t reflect between objects; a white object next to a red object will not pick up any reflected red light. That requires global illumination (GI), which simulates light reflecting repeatedly between objects and can look highly realistic. Again, certain types of GI can be calculated during the design stage and effectively painted onto objects, and again, that creates caveats around which aspects of the scene can change. GI can demand a vast number of calculations for a large number of points across the surface of an object as light diffuses from that surface.
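The colour-bleeding effect described above can be hinted at numerically. This sketch, with entirely illustrative names and values, applies a single diffuse bounce per colour channel; real GI repeats this over many bounces and many surface points:

```python
def shade_with_bounce(surface_colour, direct, neighbour_colour, neighbour_direct, bounce):
    """Per-channel shading with one indirect bounce: the neighbour
    reflects a fraction (bounce) of the direct light it receives,
    tinted by its own colour, onto this surface. Results are rounded
    only to keep the demonstration readable."""
    return tuple(
        round(s * (direct + n * neighbour_direct * bounce), 3)
        for s, n in zip(surface_colour, neighbour_colour)
    )

white, red = (1.0, 1.0, 1.0), (1.0, 0.0, 0.0)
# The white surface picks up a red cast from its neighbour:
print(shade_with_bounce(white, 0.8, red, 0.8, 0.25))   # (1.0, 0.8, 0.8)
```

Only the red channel is lifted by the bounce, which is the reflected red light the simple per-polygon model cannot produce.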

Types Of Light

The types of light simulated vary between pieces of software, but most will be recognisably similar to the options discussed here, and can approximate many common film and television lighting tools.

Point lights broadly simulate a single light bulb in space, while spot lights behave similarly but restrict their output to a cone with a definable angle and, potentially, a variable falloff from the centre to the edge of the beam, somewhat like a Fresnel light. However, because both types of light are, in theory, infinitely small, they will usually create completely sharp shadows by default. A real Fresnel, while far from a soft light, has a real-world size and will often cast at least a slightly soft-edged shadow, depending on how far it is from the subject. Soft-edged shadows can be simulated using one of several techniques, from the crudest approach of simply blurring the shadow to much more sophisticated and accurate simulations.
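The cone-shaped beam with centre-to-edge falloff can be sketched as a simple intensity function. This is a generic linear inner/outer-cone model, not any particular engine’s implementation:

```python
import math

def spot_intensity(cos_angle, inner_deg, outer_deg):
    """Intensity for a cone-shaped light: full inside the inner cone,
    zero outside the outer cone, and a linear falloff between the two.
    cos_angle is the cosine of the angle between the light's axis and
    the direction to the surface point being lit."""
    cos_inner = math.cos(math.radians(inner_deg))
    cos_outer = math.cos(math.radians(outer_deg))
    if cos_angle >= cos_inner:
        return 1.0
    if cos_angle <= cos_outer:
        return 0.0
    return (cos_angle - cos_outer) / (cos_inner - cos_outer)

print(spot_intensity(1.0, 20.0, 40.0))                           # on axis: 1.0
print(spot_intensity(math.cos(math.radians(50.0)), 20.0, 40.0))  # outside the beam: 0.0
```

Varying the gap between the inner and outer angles is, in effect, the spot/flood-style control over how hard the beam edge appears.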

Creating really large soft lights requires an area light, which has a controlled size in the virtual world and can accurately simulate the way soft lights illuminate objects and cast shadows. The earliest approximations of area lights built them from a large number of small, individually low-powered point or spot lights distributed across the surface of the panel. More recent techniques are more sophisticated, but it’s easy to see why area lights usually create a much higher workload for the computer than point or spot lights.
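That early approximation – spreading many small lights across the panel and combining their results – is easy to demonstrate in miniature. This illustrative, two-dimensional sketch tests whether each sample light on the panel can see a surface point past a blocker; averaging the tests is what produces the soft penumbra:

```python
def soft_shadow(point_x, light_xs, blocker_lo, blocker_hi, blocker_y, light_y):
    """Fraction of an area light visible from a surface point at y = 0.
    The panel is approximated as small lights at x positions light_xs,
    at height light_y; a blocker spans [blocker_lo, blocker_hi] at
    height blocker_y. Averaging the per-light visibility tests turns
    a hard shadow edge into a penumbra."""
    visible = 0
    for lx in light_xs:
        # Where the light-to-point ray crosses the blocker's height.
        t = (light_y - blocker_y) / light_y
        cross_x = lx + (point_x - lx) * t
        if not (blocker_lo <= cross_x <= blocker_hi):
            visible += 1
    return visible / len(light_xs)

# A panel two units wide, four units up, over a one-unit-wide blocker:
panel = [-1.0, -0.5, 0.0, 0.5, 1.0]
for px in (0.0, 1.0, 2.0, 4.0):
    print(px, soft_shadow(px, panel, -0.5, 0.5, 2.0, 4.0))
# The point at 0.0 is fully shadowed, 1.0 and 2.0 sit in the
# penumbra with intermediate values, and 4.0 is fully lit.
```

Five samples is crude; production renderers use far more, which is why area lights cost so much more than a single point light.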

Other types of light might include ambient light, which is assumed to illuminate all objects in the world regardless of their position. Ambient light can help simulate the general illumination of, say, an overcast sky, although because it is directionless, it risks creating a flat, overlit result. Most software now provides more sophisticated ways of simulating sky light which can use some of the more advanced lighting models we’ve hinted at to create very convincing lighting environments. Sometimes, this kind of light might be based on a 360-degree image of a real or computer-generated environment.
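The flattening risk follows directly from the arithmetic: ambient light adds the same direction-free floor to every surface, regardless of orientation. A minimal sketch with illustrative names:

```python
def shade(albedo, ambient, light_terms):
    """Per-channel surface colour: a direction-free ambient floor plus
    the per-light diffuse terms, clamped to 1.0. Because the ambient
    term is identical for every surface, raising it lifts everything
    equally, which is what flattens the image."""
    total = ambient + sum(light_terms)
    return tuple(min(1.0, channel * total) for channel in albedo)

# With no directional light at all, every surface still reads at 20%
# of its own colour, whichever way it faces:
print(shade((1.0, 0.5, 0.25), 0.2, []))
```

The more sophisticated sky-light models mentioned above effectively replace that single constant with direction-dependent illumination, which is why they look so much more convincing.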

Optimisations & Approximations

Calculating certain kinds of shadowing and global illumination in real time has only recently become practical. That allows things to move, but it’s often necessary to nominate which objects and lights need to change during real-time rendering, rather than treating the whole scene as dynamic. Most current software can use a combined approach, where the shadows and highlights cast by lights which won’t change onto objects which won’t move are pre-calculated, while objects and lights which must move and change are rendered in real time, and the two solutions are combined. The assumption is that concentrating computer power on the things which must move and change will create the desired effect while maintaining workable performance.

Hybrid solutions are sometimes possible, where calculations for shadow and reflection are made for a single light and kept separate from the calculations made for other lights. This can allow the brightness and colour of an individual light to be altered, though not position, beam angle, falloff, or other settings which would change how its light falls on the world. Significant improvements in the ability to perform (or at least closely approximate) the more accurate types of lighting in real time have recently made it possible to reduce reliance on less-flexible pre-calculated lighting. The specifics will depend on the exact nature of the scene, what the scene is required to do, and how the cinematographer wants to light it.
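That per-light hybrid can be sketched as storing each light’s pre-calculated contribution, shadowing included, and rescaling it live. The names and numbers here are illustrative:

```python
def relight(baked_contributions, gains):
    """Sum each light's pre-calculated (already shadowed) contribution
    at a surface point, rescaled by a live per-light gain. Brightness
    and colour can change in real time by varying the gains; moving a
    light would invalidate the stored contributions, which is why
    position, beam angle and falloff stay fixed."""
    return sum(b * g for b, g in zip(baked_contributions, gains))

# Dim the second of two lights to half power without re-rendering
# any shadows:
print(relight([0.3, 0.5], [1.0, 0.5]))
```

Stored per channel rather than as a single value, the same idea supports live colour changes too.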

Tools For The Cinematographer

Because of its reliance on technology developed for the vast market of video games, it’s likely that the quality, variety and performance of lighting techniques for virtual worlds will continue to improve over time. With virtual production in general still a fairly new idea, the interaction between cinematographers and virtual production lighting is still being explored. It seems likely that best practices will arise as camera and virtual production specialists each learn something of what the other needs and wants, a situation which will be familiar to practitioners of such a collaborative artform as filmmaking. Some virtual production facilities have gone so far as to have their lighting specialists visit film sets and shadow the crew to improve their understanding of film and TV working practices, which seems likely to improve that collaboration.

In the meantime, modern virtual production systems are already capable of realistic lighting and lighting-adjacent techniques such as mist and fog, so it should be clear that tools to allow cinematographers to bring convincing and appropriate lighting to virtual worlds are already well-developed.
