It seems clear that there is such a thing as the “film look”. But how did it come about?
The film look is a subject that comes up regularly, so there is no doubt that there is something to it, but that is a long way from being able to say how it came about. One can see historical, technical, economic and cultural influences in movies, but is any one of those factors dominant?
Without any doubt movies are cultural and take a place alongside, for example, literature.
There are a number of things in common. Movies and books both rely on a medium, a plastic film or a sheet of paper, neither of which has much to do with the subject of the movie or book. Novels and most movies are forms of storytelling, and cinematography has found ways of shooting that help or guide the audience to enhance the story. Shallow depth of field is a good example. Some of the film look must stem from the use of those techniques.
Moviemakers, like authors, have come from countless different backgrounds, having different nationalities, different political views, different ethical standards, different imaginations, yet despite that huge variability, the film look will still be there. That suggests that the film look is somewhat independent of the subject of the movie and of the moviemaker. The cultural influence doesn’t explain what we see. There is, however, another cultural effect: the way cinematographers think and work, and I suspect that the commonality of approach there does have an effect.
As the oldest moving image technology, film has a long history and much of what happens today is still based on decisions taken many decades ago. As is often the case, such decisions were often taken in the absence of any theoretical basis, as the theory had not caught up with the new medium. In order to understand where we now are, it is necessary to look at how we got here.
Until electronic technology took over, movie cameras worked in essentially the same way throughout. The lens formed an image that was recorded on the stationary film; the film was then advanced by one frame and stopped again so the next image could be recorded. The film was perforated so that the camera could move it accurately by engaging pins in the holes. A rotating shutter was synchronised to the film transport so that light was blocked whilst the film was moving.
The angle cut away from the shutter determined the exposure time, while the film was stationary, as a fraction of the frame period. Two factors led to the adoption of low frame rates. The first was purely economic: the cost of the film was directly proportional to the frame rate. The second was that the relatively insensitive film chemistry of the day required a long exposure, which could only be obtained with a low frame rate. A frame rate of around 16-18Hz was typical.
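The relationship between shutter angle, frame rate and exposure can be written as a one-line formula. The sketch below illustrates it; the 180-degree shutter angle is chosen purely as an example, not a value from the text:

```python
# Exposure per frame from a rotating shutter: the open fraction of the
# shutter disc is angle/360, and the film is exposed for that fraction
# of each frame period (1/fps seconds).

def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Exposure per frame in seconds."""
    return (shutter_angle_deg / 360.0) / fps

# An illustrative 180-degree shutter at 24 fps gives 1/48 s per frame;
# at 16 fps the same angle gives a longer 1/32 s exposure, which suited
# the insensitive emulsions of the day.
print(exposure_time(180, 24))  # 0.020833... (1/48 s)
print(exposure_time(180, 16))  # 0.03125 (1/32 s)
```

This makes the trade-off explicit: for a given shutter angle, lowering the frame rate is the only way to lengthen the exposure.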
The difficulty of making mechanical parts that would operate at such speeds was probably not a factor as metals with a suitable strength to weight ratio were available from, for example, clock making. Many cameras were hand-cranked, which meant that speed stability was somewhat lacking, but as there was no soundtrack this did not matter very much.
Projection of such movies required something similar to the camera, with a pull-down mechanism and a shutter to block the light while the film was moving. However, the interruption of the shutter caused the screen to go dark for a significant proportion of the frame period and the result was severe flicker. This was highly visible as the human visual system (HVS) does not cease seeing temporal changes until around 50Hz.
The solution adopted at the time, and which was retained for as long as films were projected, was to put more blades on the shutter so that the flicker frequency was increased. Three blades on the shutter would make a 16Hz movie flicker at 48Hz, which was acceptable. The actual frame rate didn’t change; instead the film was pulled down during only one of the three interruptions in the light, meaning that the same film frame was put on the screen three times.
Early movies were silent and dialog was handled by captions. Technical progress led to the development of an optical soundtrack alongside the film frames that could be read by a photocell. The sound was displaced along the film by a standard distance so that the film in the gate could stop and start whereas the film at the sound pickup could run at a steady speed. For the first time it became necessary to standardize the frame rate so the sound would be reproduced at the correct pitch.
The frame rate chosen was 24Hz, which turned out to be the most enduring frame rate ever adopted. But where did it come from? There are a couple of factors involved. Firstly, if the linear film speed is too low, the wavelengths recorded on the optical soundtrack will be too short for the pickup to resolve, compromising high-frequency sounds. Secondly, and I am grateful to Mark Schubin who researched this one, it transpires that although movies were shot at 16-18Hz, unscrupulous theatre owners were cranking the projectors faster so they could get more screenings per day and thus make more money.
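The soundtrack constraint can be quantified: the shortest wavelength recorded on the film is the linear film speed divided by the audio frequency. The sketch below uses the standard 35mm geometry (four perforations per frame, roughly 4.75mm perforation pitch); treat the exact numbers as illustrative rather than figures from the text:

```python
# Wavelength of a tone on an optical soundtrack. Higher linear film
# speed means longer recorded wavelengths, which are easier for the
# photocell pickup to resolve. Assumes standard 35mm four-perf frames.

PERFS_PER_FRAME = 4
PERF_PITCH_MM = 4.75   # approx. 0.187 in per perforation

def film_speed_mm_s(fps: float) -> float:
    """Linear film speed in mm/s at a given frame rate."""
    return fps * PERFS_PER_FRAME * PERF_PITCH_MM

def wavelength_mm(fps: float, audio_hz: float) -> float:
    """Length one audio cycle occupies on the film, in mm."""
    return film_speed_mm_s(fps) / audio_hz

# A 10 kHz tone at 24 fps occupies about 0.046 mm of film; at 16 fps
# it would shrink to about 0.030 mm, harder for the pickup to resolve.
print(round(wavelength_mm(24, 10_000), 4))  # 0.0456
print(round(wavelength_mm(16, 10_000), 4))  # 0.0304
```

This shows why a higher frame rate, and hence a higher linear film speed, directly benefits high-frequency sound reproduction.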
Today, whenever we see silent movie clips, things always seem to happen in a rush, which rather proves the point. Speeding up motion often makes it funny, and that was exploited in Mack Sennett’s Keystone Kops films, which went even further by deliberately removing frames from the film to make the movement jerky.
In a sense 24Hz was the de facto standard at which silent movies were shown and it was simply adopted for the talkies. Clearly Sennett’s tricks could not readily be applied to a movie with sound, so the talkies ended the era of the Keystone Kops.
Raising the frame rate to 24Hz meant that projectors could use a two-bladed shutter and the flicker frequency would still be 48Hz. That approach prevailed for a long, long time. One fact that stands out is that the choice of 24Hz as a frame rate was based on absolutely no understanding of the human visual system whatsoever. What we now call psycho-optics was essentially non-existent. The information wasn’t there because the research had yet to be done.
24Hz continued to be used as cinematography made the transition to digital technology.
Digital cinema was mostly about preventing piracy. The movie was delivered to the cinema and stored on hard drives as a strongly encrypted data file. The decryption was in the projector and only the designated projector could play the file. The opportunities to steal or copy the movie were greatly diminished. That was the priority and little else changed. The main difference is that an electronic projector can update the image essentially instantaneously, so there is no equivalent of pull-down and no need to black out the screen. There is then little flicker and no requirement to double-project each frame. Each frame is shown for 1/24 of a second and then replaced.
Given its background it would be surprising if 24Hz represented some kind of optimum, and of course it doesn’t. When television was developed, substantially higher picture rates were adopted. In view of that it is probably fair to say that 24Hz is inadequate as a frame rate. If that is accepted, it follows immediately that a great deal of the film look probably stems from the various ways cinematographers have found over the years to deal with and minimize the impact of that inadequacy.
One point it is vital to grasp is that we still don’t have any moving picture technology. What is presented to the moviegoer and the television viewer is a series of still pictures that are replaced at the frame rate. The motion is an illusion that exists entirely in the minds of the audience, as the best interpretation the HVS can come up with regarding what it is being shown. Studying how movie cameras and projectors work can only go so far in explaining how movies work, because the all-important illusion of motion takes place after the projector has presented the pictures.
I remember my own efforts to find out how movies worked. I read a book that had all the usual stuff about cameras and projectors and finally stated that the viewer saw movement because of a process called fusion. Giving something a name is not quite the same thing as explaining it and the reader was none the wiser.
Since then I have had rather more success in understanding motion portrayal and how it contributes to the film look. Part 2 will go into the subject in some detail.