Working with 8mm and Super 8mm reels of film presents both a technical and a workflow challenge for documentarians.
Content producers often prefer to shoot or record original content. Documentarians, on the other hand, typically must rely on material recorded by others that is often stored on film stock, Regular 8mm and Super 8mm being common formats. Working with older technology is a challenge requiring special techniques.
To this point, the questions asked, and hopefully answered, have not dealt with the technical choices that need to be made when specifying the type of transfer you want performed by the lab you plan to use. The fundamental question is: how much information can be extracted from a film frame? And what type of transfer will accomplish this goal?
The latter question focuses on the way Regular 8 and Super 8 frame-rates can be matched to video frame-rates. The former question involves the image resolution and chroma sampling to which film will be transferred. Part 2 of this article will deal with these technical challenges. Part 1 of this series can be found here.
Matching Film to Video Frame Rates
In all cases except 24fps to 24p, a film-frame to video-frame transfer will result in a motion speed-up. Depending on the capabilities of your editing software, there are three potential solutions available to prevent this speed-up. Each method has its strengths and weaknesses.
Frame Sampling: Software outputs a frame at the video frame-rate. At each point where a frame is output, the software grabs the nearest-in-time film frame. Figure 1 schematically illustrates how 16 film frames (upper row) might be converted to 30 video frames (lower row).
In this drawing the gray frames represent video frames that might contain a copy of the previous film frame (red = late) or a copy of the next film frame (green = early). The choice of which film frame, late or early, is dependent on the timing between the beginning and end of each film frame in relation to the beginning and end of each video frame.
Each of the gray frames, by creating a pair of identical video frames, creates judder. Although the converted video cadence may be rough, each output frame will be a clear film frame. Figure 2 illustrates how 18 film frames (upper row) might be converted to 30 video frames (lower row). Again, each gray frame creates a pair of identical video frames, and thus judder.
Figure 3 shows a possible, worst case, distribution of judder frames after conversion. The hypothetical presence of so many judder frames predicts the cadence will be very rough.
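The nearest-frame logic described above can be sketched in a few lines of Python. This is an illustrative model only, not any particular NLE's implementation; the frame counts and the rounding rule are assumptions.

```python
# Nearest-frame sampling: for each output video frame's timestamp, pick
# the film frame closest in time. Duplicated picks are the "judder"
# frames: a film frame held for two video frames in a row.

def sample_frames(film_fps, video_fps, duration_s=1.0):
    """Return, for each video frame, the index of the film frame chosen."""
    n_video = round(video_fps * duration_s)
    n_film = round(film_fps * duration_s)
    mapping = []
    for v in range(n_video):
        t = v / video_fps                         # output frame timestamp
        f = min(round(t * film_fps), n_film - 1)  # nearest film frame
        mapping.append(f)
    return mapping

m = sample_frames(16, 30)
judder = sum(1 for a, b in zip(m, m[1:]) if a == b)
print(len(m), judder)
```

For 16fps into 30fps, 14 of the 30 output frames duplicate their predecessor, which is why the resulting cadence is rough.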
Frame Blend: This is a version of the prior solution except that when a video frame needs to be output that lies between two film frames, the video output is a blend (yellow) of these two frames. While cadence will be smooth, many video output frames—depending on the amount of motion in each film frame—will be blurred. The greater the difference between frames the greater the blur. Figures 4 and 5 show 16fps and 18fps converted to 30fps using frame blending.
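The blend method can be sketched the same way. The linear weighting below is an assumption for illustration; actual NLE blend curves may differ, and real software mixes whole pixel arrays, not the scalar stand-ins used here.

```python
# Frame blending: when an output frame falls between two film frames,
# mix the two frames in proportion to temporal distance. A weight of
# 0.0 means an exact copy of one film frame; anything else is a blend
# (and, with motion, a blur).

def blend_plan(film_fps, video_fps, n_video):
    plan = []
    for v in range(n_video):
        pos = v * film_fps / video_fps   # position on the film timeline
        lo = int(pos)
        w = pos - lo                     # weight given to the next frame
        plan.append((lo, lo + 1, round(w, 3)))
    return plan

for entry in blend_plan(18, 30, 5):
    print(entry)
```

Note that for 18fps-to-30fps only some output frames land exactly on a film frame; the rest carry a fractional blend weight, which is where the motion blur comes from.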
Optical Flow: This technique inputs a series of frames that undergo motion analysis that generates a set of motion vectors. Based upon these vectors, software creates output frames on a different time-scale where each output frame (yellow) is composed of pixels that will be in their predicted location.
While this very time-consuming technique can produce excellent results, optical flow can sometimes generate artifacts. Nevertheless, it is especially useful when conversions are the most difficult—when frame rates are quite close. Ideally your NLE should use your computer’s GPU for this process. Even using a GPU, real-time playback will require rendering. Figure 6 shows 24fps converted to 30fps using frame interpolation.
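The core idea of optical-flow interpolation can be reduced to a toy example: given a motion vector between two frames, synthesize a frame at a fractional time by moving pixels part-way along that vector. This sketch works in one dimension with a hand-supplied vector; real optical flow estimates the vectors from the images themselves, which is where both the quality and the artifacts come from.

```python
# Optical-flow style interpolation, reduced to 1D. frame_a maps pixel
# positions to values; motion is the measured displacement (pixels per
# frame). At fractional time t, each pixel is placed at its predicted
# intermediate position.

def interpolate_1d(frame_a, motion, t, width):
    out = {}
    for x, value in frame_a.items():
        nx = round(x + motion * t)   # predicted position at time t
        if 0 <= nx < width:
            out[nx] = value
    return out

# A bright dot at x=2 moving 6 pixels per frame: halfway between
# frames (t=0.5) it should appear at x=5.
frame_a = {2: 255}
mid = interpolate_1d(frame_a, 6, 0.5, 10)
print(mid)
```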
The fundamental quality question is how much information can be extracted from a film frame. This question can be answered by tests or by a little mental effort. We know 35mm negatives are transferred to 2K digital intermediates. Four strips of 8mm film will fit within a 35mm frame, so each strip would have slightly less than SD resolution. Even with a 4K digital intermediate, each 8mm strip will have less than 1280x720 resolution.
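The back-of-envelope arithmetic behind that claim is simple: if four 8mm strips span the width of a 35mm frame, each strip gets roughly a quarter of the scan's horizontal pixels. The DI widths below are the common nominal values; treat this as an estimate, not a measurement.

```python
# Rough per-strip resolution when four 8mm strips fit across a 35mm
# frame scanned to a 2K or 4K digital intermediate.

DI_WIDTHS = {"2K": 2048, "4K": 4096}
for name, width in DI_WIDTHS.items():
    per_strip = width // 4
    print(f"{name} DI -> ~{per_strip} px across one 8mm strip")
# 2K yields ~512 px (below SD's 720-pixel width); 4K yields ~1024 px
# (below 1280x720).
```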
So why do labs offer 2K/FHD and 4K/UHD transfer resolutions? As I learned when, five decades after shooting 8mm film, I decided to create a video: it was now an HD world. I found the quality loss of an SD upscale to HD—even using an entirely digital path—to be too great. Figure 7 shows my solution. I first upscaled SD video to 960x720 and then centered the result within a 1920x1080 black matte.
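The pillarbox geometry of that solution is easy to compute. This is a sketch of the offsets only, not an actual image operation; the frame sizes are the ones given above.

```python
# Centering a 960x720 upscale on a 1920x1080 black matte: the offsets
# below are the widths of the black borders on each side.

matte_w, matte_h = 1920, 1080
img_w, img_h = 960, 720
x_off = (matte_w - img_w) // 2   # left/right border width
y_off = (matte_h - img_h) // 2   # top/bottom border height
print(x_off, y_off)              # -> 480 180
```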
By transferring film to 2K/FHD, there may well be no increase in image detail, but the 2K/FHD footage will not need to be upscaled during your edit. A 4K/UHD transfer can be mixed with 4K/UHD video. Or, it can be employed within a 2K/FHD production as a pan, scan, zoom, or reframe.
To test this concept, I transferred one cartridge of 50 ISO Super 8 film to both FHD and UHD. These transfers were done at no cost by Pro8mm in Los Angeles. (A big thank you to Rhonda Vigeant.) I shot using my Canon 814 (Figure 8) at the same locations in Paris 50 years, to the month, after I shot there with my Bolex H8.
Because I would be color grading the transfers, I requested a One-Light scan rather than a more expensive supervised Scene-to-Scene scan. See Figure 9.
The FHD and UHD transfers of 12-bit 4:4:4 RGB data from Pro8mm’s film scanner were made to ProRes 4444. ProRes 4444 can carry 12-bit data, so the only loss of quality would be from the compression of the RGB components. Transfer to ProRes 4444 certainly can be considered a “high quality” transfer.
Two other FHD transfers were made of a second Super 8 cartridge—one to ProRes 422 HQ (10-bit) and the other to a DPX file. A DPX file is composed of a series of uncompressed pictures—one for each scanned video frame. See Figure 10.
These transfers were made to check the transfer quality of Super 8 film to the 10-bit 4:2:2 “normal quality” of ProRes 422 HQ and to the “super quality” DPX format.
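The three formats can be compared on raw bits per pixel before any compression. The ProRes figures describe the source data each codec then compresses; the DPX figure assumes the common 10-bit RGB flavor of the format, which is an assumption about this particular scan.

```python
# Uncompressed bits per pixel for each transfer format's source data.
# 4:2:2 averages to 2 samples per pixel: luma every pixel, plus Cb and
# Cr shared between each horizontal pair of pixels.

def bits_per_pixel(samples_per_pixel, bit_depth):
    return samples_per_pixel * bit_depth

formats = {
    "4:4:4 12-bit (ProRes 4444 source)": bits_per_pixel(3, 12),
    "4:2:2 10-bit (ProRes 422 HQ)":      bits_per_pixel(2, 10),
    "10-bit RGB (typical DPX)":          bits_per_pixel(3, 10),
}
for name, bpp in formats.items():
    mb_per_fhd_frame = bpp * 1920 * 1080 / 8 / 1e6
    print(f"{name}: {bpp} bits/px, {mb_per_fhd_frame:.1f} MB per FHD frame")
```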
Part 3 of this article series will examine what difference, if any, can be seen among the different quality transfers. The primary comparison will be made between the ProRes 4444 transfers. Will a quality difference be found, for an HD edit, between the HD transfer and the UHD transfer? And how will a 2X digital zoom from UHD into HD look?
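The arithmetic behind that 2X zoom is worth spelling out: cropping the central HD-sized window from a UHD frame is a pixel-for-pixel 2X punch-in with no upscaling. A sketch of the geometry, using standard UHD and HD frame sizes:

```python
# A 2X "zoom" from UHD into an HD timeline: crop the central 1920x1080
# window of a 3840x2160 frame. No scaling is needed, so no resampling
# loss occurs.

uhd_w, uhd_h = 3840, 2160
hd_w, hd_h = 1920, 1080
zoom = uhd_w / hd_w                                    # 2.0
crop = (uhd_w // 2 - hd_w // 2,                        # crop origin x
        uhd_h // 2 - hd_h // 2,                        # crop origin y
        hd_w, hd_h)                                    # crop size
print(zoom, crop)   # -> 2.0 (960, 540, 1920, 1080)
```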
Part 3 will also cover what you must tell your lab about how you want Super 8 and Regular 8mm “framed” during transfer: Over-scanned 16:9, 4:3 within 16:9, or 16:9. This decision will determine how you must treat your transfers during editing.
Editor’s note: You may enjoy other Steve Mullen articles. A full list of his tutorials can be found by searching for his name from The Broadcast Bridge home page search box.