In this series of articles, we explain broadcasting for IT engineers. Television is an illusion: there are no moving pictures, and today’s broadcast formats are heavily dependent on decisions engineers made in the 1930s and 1940s. In this article, we look at camera lenses – why we use them, and how.
The most primitive lens available is that used in the pinhole camera, which dates back thousands of years, although it wasn’t until the seventeenth century that the device gained its name, the “camera obscura”. Technically the pinhole isn’t a lens at all but an aperture, as it is just a very small hole – the size of a pin head, or smaller.
Pinholes suffer from two very basic restrictions: they have infinite depth of field, and they need very long exposure times of many seconds or minutes. Simply enlarging the hole blurs the image, because light reflected or emitted by an object radiates in all directions rather than arriving as parallel rays.
Depth Of Field Is Powerful
In broadcasting, film, and photography, depth of field is a powerful phenomenon used by program makers to create interest in an image and draw our attention to it.
If a director is shooting a wide establishing shot of a city scene then they will use a long depth of field. This means all points within the image are in focus allowing the viewer to subconsciously scan the whole frame.
When the director wants us to look at an actor talking, they will use a shallow depth of field, keeping the face in focus while the background and foreground fall out of focus. Subconsciously, our attention is drawn to the actor’s face.
Diagram 1 – Both of these pictures were taken on the same camera. The picture on the left has an aperture of f-5.6 and gives a shallow depth of field drawing the attention to the face. The picture on the right has an aperture of f-36 with a deep depth of field and keeps the background in focus encouraging the eye to move over the image.
As the frame rate of a standard or high definition system is either 25 or 29.97 frames per second, the exposure time is at most between 33ms and 40ms. This falls well short of the exposure time a pinhole camera needs, so a method of concentrating more light onto the sensor is required.
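The exposure figures above follow directly from the frame rates. A minimal sketch, assuming the sensor can integrate light for at most the full frame period (real cameras usually expose for less):

```python
# Maximum exposure time per frame for common broadcast frame rates.
# Assumes exposure can last at most one full frame period.

def max_exposure_ms(frames_per_second: float) -> float:
    """Return the frame period in milliseconds."""
    return 1000.0 / frames_per_second

for fps in (25.0, 29.97):
    print(f"{fps} fps -> {max_exposure_ms(fps):.1f} ms per frame")
# 25 fps gives 40.0 ms; 29.97 fps gives 33.4 ms
```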
Camera Lenses Solve Problems
The camera lens solves two problems: it delivers enough light to the sensor to provide a viewable image within a single frame, and it gives program makers creative control over the depth of field.
A lens refracts light to bend and concentrate it onto a plane. By doing so we resolve our first challenge, that is to increase the amount of light appearing on the sensor. But a lens might deliver too much light. To control the amount of light reaching the sensor we use a variable hole between the lens and the sensor called an aperture in film and photography, and an iris in television.
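The refraction described above is commonly modelled with the thin-lens equation, 1/f = 1/u + 1/v, relating focal length f, object distance u and image distance v. A small illustrative sketch (the 50mm lens and 2m subject distance are assumptions, not values from the article):

```python
# Thin-lens sketch: where does a lens of focal length f form the image
# of an object at distance u? Solves 1/f = 1/u + 1/v for v.

def image_distance(focal_length_mm: float, object_distance_mm: float) -> float:
    """Distance behind the lens at which a sharp image forms."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A 50 mm lens focused on an object 2 m away:
v = image_distance(50.0, 2000.0)
print(f"image plane at {v:.2f} mm behind the lens")  # ~51.28 mm
```

This is why a camera needs a focus control: as the object distance u changes, the image plane v moves, and the lens elements must shift to keep it on the sensor.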
Camera lenses are made of several individual elements, each of which is either a convex or a concave glass lens.
Iris Affects Depth Of Field
Usefully, changing the size of the aperture also affects the depth of field. A small aperture will give a long depth of field, and a large aperture will give a short depth of field. The camera operator or cinematographer now has two new tools in their arsenal: focal length and aperture size.
A lens only has one plane of focus, whether it is a composite lens pack or a single convex element. The concept of depth of field arises from the fact that light entering a lens is not parallel, as it emanates from every point of the source in all directions. Light not parallel to the principal axis of the lens will concentrate at different focal planes, giving the appearance of out of focus images in the foreground and background.
If a short depth of field is required, a large iris is selected. Reducing the size of the iris blocks light that is not parallel to the principal axis, causing the remaining light to focus close to one plane and thus giving a very deep depth of field.
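The relationship between iris size and depth of field can be put into rough numbers. A common approximation for a subject well inside the hyperfocal distance is DOF ≈ 2·u²·N·c / f², where u is subject distance, N the f-number, and c the circle of confusion. The values below (c = 0.03mm, a 50mm lens, a subject at 3m) are illustrative assumptions, not figures from the article:

```python
# Approximate total depth of field for a subject well inside the
# hyperfocal distance: DOF ~ 2 * u^2 * N * c / f^2.
# c = 0.03 mm is a commonly quoted circle of confusion for a
# full-frame sensor; all values here are illustrative assumptions.

def depth_of_field_mm(u_mm: float, f_number: float,
                      focal_mm: float, coc_mm: float = 0.03) -> float:
    return 2.0 * u_mm**2 * f_number * coc_mm / focal_mm**2

# Same lens and subject distance: opening the iris shrinks the DOF.
for n in (22.0, 5.6, 1.4):
    print(f"f/{n}: DOF ~ {depth_of_field_mm(3000.0, n, 50.0):.0f} mm")
```

Note how the f-number enters linearly: stopping down from f/1.4 to f/22 deepens the in-focus zone by roughly a factor of sixteen.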
Diagram 2 – Light enters the front of the lens and the iris controls how much light reaches the sensors.
However, reducing the iris size also decreases how much light reaches the sensor, resulting in a darker image. In broadcasting we can increase the gain of the video amplifiers behind the sensor to make the picture brighter, but increasing gain also increases noise, which in the extreme will visibly degrade the picture.
ND Filters Control Bright Light
If a short depth of field is required in bright lighting, the iris must be opened very wide, which could overexpose the picture. We can’t fix overexposure electronically, as the photosensitive cells in the sensor have saturated. The solution is to reduce the light reaching the sensor by using neutral density (ND) filters.
ND filters reduce all colors in visible light equally to decrease the amount of light reaching the sensor. They are usually mounted inside the camera between the lens and the sensor, and a dial on the side of the camera selects how much filtering is required.
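ND filters are conventionally rated by optical density d, where the transmitted fraction of light is 10^−d, and each halving of light is “one stop”. A short sketch under that convention (the specific densities 0.3/0.6/0.9 are common catalogue values, used here as examples):

```python
import math

# ND filters rated by optical density d: transmitted fraction is
# 10**-d. Each halving of light is "one stop", so stops = d / log10(2).

def nd_transmission(density: float) -> float:
    return 10.0 ** -density

def nd_stops(density: float) -> float:
    return density / math.log10(2.0)

for d in (0.3, 0.6, 0.9):
    print(f"ND {d}: passes {nd_transmission(d):.1%}, "
          f"~{nd_stops(d):.0f} stops of reduction")
# ND 0.3 passes ~50% (1 stop); ND 0.9 passes ~12.6% (3 stops)
```

This is why the in-camera filter wheel typically steps in multiples of 0.3: each step trades exactly one stop of light, letting the operator open the iris one stop wider for the same exposure.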
Depth of field is a function of the lens focal length and the size of the iris. To allow a common, scalable system of measurement applicable to all lenses, the “f-stop” was defined: the ratio of the lens focal length to the diameter of the aperture or iris.
F-stop numbers are inversely proportional to the size of the aperture. An iris of f/1.2 has a very large aperture and gives a shallow depth of field; an f-stop of f/22 has a very small aperture and gives a long depth of field.
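The ratio definition above makes the inverse relationship concrete: N = f / D, so the aperture diameter is D = f / N, and the light gathered scales with aperture area, i.e. roughly 1/N². A minimal sketch (the 50mm focal length is an assumed example):

```python
# The f-number is the ratio of focal length to aperture diameter:
# N = f / D. Light gathered scales with aperture area, ~ 1/N^2,
# which is why f/1.2 is a "fast" setting and f/22 a very "slow" one.

def aperture_diameter_mm(focal_mm: float, f_number: float) -> float:
    return focal_mm / f_number

def relative_light(f_number: float) -> float:
    """Light admitted relative to f/1.0 (area ratio)."""
    return 1.0 / f_number**2

for n in (1.2, 5.6, 22.0):
    d = aperture_diameter_mm(50.0, n)
    print(f"f/{n}: {d:.1f} mm aperture on a 50 mm lens, "
          f"{relative_light(n):.4f}x the light of f/1.0")
```

Because the measure is a ratio, f/5.6 admits the same light per unit of sensor area on a 10mm wide-angle as on a 500mm telephoto, which is what makes the f-stop scale portable across all lenses.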
The focus control on a camera allows us to focus on images at varying distances from the camera. Without it we would only be able to focus on an object at a fixed distance from the camera making operation virtually impossible.
A variable focal length lens is referred to as a “zoom lens” and changes the angle of view of the lens. A long focal length zooms into distant objects to make them appear larger, while a short focal length provides a wide angle of view for large panoramas, or for objects close to the camera, making them appear smaller so they fit into the frame.
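The link between focal length and angle of view can be sketched with simple trigonometry for a rectilinear lens: AOV = 2·atan(w / 2f), where w is the sensor width. The 36mm sensor width and the example focal lengths below are assumptions for illustration:

```python
import math

# Horizontal angle of view for a simple rectilinear lens:
# AOV = 2 * atan(sensor_width / (2 * focal_length)).
# 36 mm (full-frame sensor width) is an assumed example value.

def angle_of_view_deg(focal_mm: float, sensor_width_mm: float = 36.0) -> float:
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

for f in (24.0, 50.0, 200.0):
    print(f"{f:.0f} mm lens: {angle_of_view_deg(f):.1f} degree angle of view")
# 24 mm gives ~73.7 degrees; 200 mm narrows to ~10.3 degrees
```

Zooming from short to long focal length is therefore just a continuous narrowing of this angle, which is what makes distant objects fill more of the frame.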
Early cameras used fixed focal length lenses: a turret on the front of the camera housed three of them, and if the camera operator wanted to make an image appear larger they had to rotate the turret to select a longer focal length. As technology progressed, zoom lenses were developed with motor controls, allowing the operator to zoom in and out of a scene by pushing the zoom-in or zoom-out controls on the camera.
Camera lenses are the first point of contact for shooting a scene, and they must be designed to very high specifications to keep distortion and aberration as low as possible. With the uptake of high dynamic range, wide color gamut and 4K/8K, there has never been so much interest in camera lenses. But these requirements come at a price, and a high-spec lens can easily cost more than the camera itself.