VR Filming: A Guide To Compromise

Virtual reality may be all the rage, but acquiring 360° video is all a matter of compromise. Depending on the project and the budget, producers and VR camera operators will quickly find themselves trading off image quality against manoeuvrability and against the cost of stitching video feeds in post. The Broadcast Bridge talked with Paris-based cinematographer and stereographer Thomas Villepoux to get a handle on cinematography, camera and rig choice for VR projects, and his responses are enlightening for anyone experimenting in the medium.

A word about Thomas before we read his interview: For VR supervision, Villepoux works mostly in France, with the companies DVMobile, Cow Prod and Digital Immersion. DVMobile does a lot of work for luxury brands, in both 360° and 360° 3D.

He is also working with a more traditional production company in France, Seppia, on two projects for France Télévisions and ARTE (the first is Le Goût du Risque). So according to Villepoux, French TV is going VR, with ARTE building its own 360° player.

BroadcastBridge: What main criteria are most important to you in selecting a VR camera array?

Thomas Villepoux: Well, some of the criteria are obvious: robustness, for example. You don't want your cameras to wiggle or shake, or the stitch will be impossible. That's easy to achieve with a GoPro rig, but more complicated with bigger cameras.

Then it's all a matter of compromising and finding the right equipment for the job. I like to work with professional cameras and good lenses, but it makes a huge and heavy rig. For a drama, with fixed shots on a tripod or heavy grip, I would try to go for a rig with Red Epics. But for a rig that you need to stick on a skydiver, for example, I would be stupid not to go with GoPros.

You need to compromise between:

  • Image quality (including cameras, lenses and recording quality)
  • Sensitivity
  • Size and weight
  • Parallax (the bigger the cameras, the bigger the parallax, so it depends whether you are shooting in a small space or in the open)
  • Practical equipment (for example, will you be able to see what you do while you're shooting?)

BroadcastBridge: Is output resolution important - given the low resolution of current headsets?

TV: Resolution is important and there is a common misconception.

People will say, 'My GoPro is shooting 4K and I have seven of them, so my rig is 28K...'. For ages we have measured the resolution of images in pixel counts. That was convenient because the image was projected on a fixed-size screen or TV.

But if you are interested in lens resolution, you know that lens manufacturers measure optical resolution and publish Modulation Transfer Functions (MTF). Resolution there has more to do with the angle between two distinguishable points.

I'll make it simpler: your GoPro is 4K. You are shooting your Sunday hike with it, with the regular GoPro lens, and the view from the top is stunning. Then your buddy, who is fond of 360° shooting, lends you his GoPro with the new '180° fisheye' he bought online. Well, the resolution of his GoPro is lower. Leaving aside lens quality, which can differ, he is capturing a bigger view on the same sensor. So if you look at your house, which looks so tiny from up there, there will be fewer pixels describing it on his GoPro than on yours. He is covering more ground with the same number of pixels, so he has less detail.

It doesn't matter if you project the image on your TV, because then your house, which has fewer pixels in his image than in yours, will also be smaller in his image (he has a wider view). So in the end you see a smaller house with fewer pixels, which gives the same impression of resolution.

When you shoot 360°, the house will always be the same size, because we deal with angles and not pixels. An image from a regular GoPro will cover 120°, around a third of the 360°, but an image from a modified GoPro will cover 180°, about half the 360° image. So with the regular GoPro you have 4K of resolution for a third of the image, and with the modified GoPro you have 4K for half of it.

And don't think you can gain resolution by adding cameras. It doesn't work that way. If you add more cameras, you will have more overlap and your stitching will be easier, but the detail you have on your house will stay the same. Each GoPro sees your house with the same number of pixels; even if multiple GoPros can see the same house, that will not give you any more detail.
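
To put rough numbers on this idea, here is a minimal sketch of the angular-resolution argument. The field-of-view and pixel figures are assumptions for illustration, not GoPro specifications:

```python
# Rough angular-resolution comparison (assumed figures, for illustration only).
# A 4K frame is ~3840 pixels wide; assume the regular lens covers roughly
# 120 degrees horizontally and the swapped-in fisheye covers the full 180 degrees.

def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
    """Average number of horizontal pixels available per degree of scene."""
    return horizontal_pixels / horizontal_fov_deg

regular = pixels_per_degree(3840, 120)   # ~32 px per degree
fisheye = pixels_per_degree(3840, 180)   # ~21 px per degree

print(f"regular lens : {regular:.1f} px/deg")
print(f"180 fisheye  : {fisheye:.1f} px/deg")

# The house subtends the same angle in both shots, so the fisheye image simply
# has fewer pixels describing it -- and adding more cameras that each see the
# house at the same px/deg does not change that figure.
```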

So this is another compromise:

  • If you go for wider lenses and fewer cameras, you will have lower resolution
  • If you go for better resolution, you will need more cameras and longer lenses

The GearVR and Oculus cannot really play back images above 4K. I would say 4K for the equirectangular (the flat, unwrapped projection of a 360-degree stitched image) is both the maximum and the minimum. Remember that the device plays an equirectangular image that may be 4K, but you see no more than a quarter of that image at a time, so the viewing resolution is closer to 1K...
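
As a quick sanity check on that 1K figure, here is the simple arithmetic, assuming a roughly 90° horizontal field of view for the headset (the exact FOV varies by device and is an assumption here):

```python
# Back-of-the-envelope viewing resolution for a 4K equirectangular frame.
# The 90-degree headset field of view is an assumed, illustrative figure.
equirect_width_px = 3840     # a 4K equirectangular frame spans the full 360 degrees
headset_fov_deg = 90         # assumed horizontal field of view of the headset

visible_fraction = headset_fov_deg / 360           # about a quarter of the panorama
visible_width_px = equirect_width_px * visible_fraction

print(f"pixels across the visible window: {visible_width_px:.0f}")   # ~960, i.e. close to 1K
```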

However, I suggest trying to shoot with more resolution than 4K, for two reasons:

  • Oversampling is always a good idea, especially when we know what some camera manufacturers understand by 4K
  • New headsets with higher resolution are coming really fast. Check http://www.starvr.com for a glimpse. It's a good idea to future-proof your content
Thomas Villepoux

BroadcastBridge: How would you explain the importance of parallax and how much this can differ in VR capture?

TV: The principle of today's 360° shooting (tomorrow's will be different) is to shoot with multiple cameras in all directions and stitch the resulting images together. You use the overlap area between two adjacent cameras to align the images in software and stitch them.

The two images you are trying to stitch are shot from different points of view, so the perspective they have on your scene is different. Practically, the relative positions of objects that are not at the same distance from the camera (for example, your house in the valley and your buddy in front of you trying to make his GoPro work) will be different in the two images. That is the parallax effect. If you try to align the background between the two cameras, the foreground will be shifted; if you align the foreground, the background will be shifted. You can only have a perfect stitch at one given distance from the camera. Often, the background is the reference.
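
To see how quickly parallax grows for close objects, here is a minimal sketch. The 6 cm baseline between adjacent lens centres is an assumed figure for a small rig, not one quoted in the interview:

```python
import math

def parallax_deg(baseline_m, distance_m):
    """Angle subtended by the camera baseline as seen from an object at the
    given distance -- a rough proxy for how far apart that object lands in
    the two overlapping images."""
    return math.degrees(2 * math.atan((baseline_m / 2) / distance_m))

baseline = 0.06   # assumed ~6 cm between adjacent lens centres

for distance in (0.5, 2.0, 10.0, 100.0):
    print(f"object at {distance:6.1f} m -> parallax ~ {parallax_deg(baseline, distance):5.2f} deg")

# The distant background barely shifts between cameras, while anything within
# arm's reach shifts by several degrees -- which is why the background is
# usually chosen as the stitching reference.
```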

The further your cameras are from one another, the bigger the parallax effect is. Sometimes I see people sticking one GoPro on their forehead, two on the sides, and one on the back. What are they expecting to do? The parallax will be huge for close objects.

It's another compromise:

  • If you go for fewer cameras and wider lenses, you will probably have bigger parallax, so the stitching areas will be problematic and the stitch will be harder to do.
  • If you go for more cameras, you can achieve smaller parallax, so you will have an easier stitch and you can have closer objects in the stitching areas. But you have more stitching areas. So you choose between a few big problems or a lot of tiny ones!

Parallax is the big problem for stitching. That is why we are working on rigs using mirrors to reduce the parallax for bigger cameras.

Headcase VR rig on set of The Strain

BroadcastBridge: GoPros are hugely popular, cheap and robust, but what issues are there working with them in VR? Is the microSD card a good medium? How difficult is it to genlock the cameras?

TV: GoPros are hugely popular because, let's be honest, they offer tremendous image quality for a very low price.

Our problem is that they are not professional cameras. There are a lot of practical and technical problems: bugs, heating issues, connectivity issues... We are desperately waiting for a 'pro' version of the camera that is more stable, with access to the camera settings, better compression, etc.

The battery is a huge, huge problem. We had to deal with that our own way and stop using the GoPro battery.

Our answer to the GoPro problem is: when we shoot with GoPros, we have a full spare rig. When one camera starts going rogue, switching to still camera mode or something, we just change the full rig and take time to solve the problem.

MicroSD is a good medium to me, if you have tiny hands and small surgical pliers. The compression is the problem: microSD cards now support a very high bitrate, so the cameras could record more data onto them. Remember that you are working with amateur equipment, so be prepared at your data-wrangling station with a six-slot microSD card reader, several sets of cards, etc.

As far as I understand, it is not technically possible to genlock the GoPro Hero 4. That means you will have time-related disparity in your stitching areas if your image moves too much. It's especially critical for extreme sports: you will see the background shaking from one camera to another. That is why most people tend to shoot at 100fps to reduce the sync problems, even if they later downgrade to 60fps or 30fps. Again, we are waiting for the next version to have genlock.
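
The arithmetic behind the 100fps tip is straightforward: without genlock the cameras free-run, so even when you pick the nearest frames at stitch time, adjacent cameras can still be up to about half a frame period apart. The figures below are simple arithmetic, not measured GoPro behaviour:

```python
# Worst-case time offset between the nearest frames of two free-running cameras,
# assuming the stitch uses whichever frames are closest in time.
for fps in (30, 60, 100):
    worst_offset_ms = 1000 / fps / 2
    print(f"{fps:3d} fps -> up to ~{worst_offset_ms:.1f} ms between adjacent cameras")

# Shooting at 100fps roughly triples the temporal precision compared to 30fps,
# even if the final delivery is downgraded to 60fps or 30fps.
```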

And oh yes, one tip: don't use the WiFi. Ever!

BroadcastBridge: How important is the angle at which cameras are mounted for stitching in post?

TV: The angle itself is not that important; the parallax is. If you put 90° between your cameras, you will have more parallax than if you put 60°. It's a matter of distance.

Of course, as long as you have enough overlap areas.

BroadcastBridge: How do you work with depth of field in VR?

TV: The fact that most people are shooting with GoPros made us believe that in VR everything is in focus. Focus is a problem, because if two of your cameras have a different focus, how can you stitch them? But if you set the same focus distance on all cameras, what happens when you have a very close actor on one side and only background on the other? Obviously it's much easier to shoot VR with a large depth of field, so we mainly work with small-sensor cameras. But we lose quality and sensitivity in the process.
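
Why small sensors give that large depth of field can be seen from the standard hyperfocal-distance formula, H = f²/(N·c) + f: focus at H and everything from roughly H/2 to infinity is acceptably sharp. The focal lengths, apertures and circle-of-confusion values below are assumptions chosen for illustration, not figures from the interview:

```python
# Hyperfocal-distance sketch: small sensors need short focal lengths for a wide
# view, and H scales with the square of focal length, so depth of field balloons.
def hyperfocal_m(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in metres for a given focal length, aperture and
    circle of confusion (all assumed, illustrative values)."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

small_sensor = hyperfocal_m(focal_mm=3.0,  f_number=2.8, coc_mm=0.006)   # action-camera class
large_sensor = hyperfocal_m(focal_mm=24.0, f_number=2.8, coc_mm=0.030)   # full-frame class

print(f"small sensor: focus at {small_sensor:.2f} m, sharp from ~{small_sensor / 2:.2f} m to infinity")
print(f"large sensor: focus at {large_sensor:.2f} m, sharp from ~{large_sensor / 2:.2f} m to infinity")
```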

We have to find ways to turn all those problems in a creative direction. When using bigger sensors with an open iris, focus may be used to direct the viewer's attention, for example. We are experimenting with that, but we still have so much to learn.
