Broadcast For IT - Part 11 - Sensors
In this series of articles, we explain broadcasting for IT engineers. Television is an illusion; there are no moving pictures, and today's broadcast formats are heavily dependent on decisions engineers made in the 1930s and 1940s. In this article, we look at the most fundamental element of video – the sensor.
The first image-gathering devices used in television were based on cathode ray tubes working in reverse. Patents stretched back to the 1920s, but it was Philo Farnsworth's Image Dissector that provided the first transmitted image in 1927. Zworykin's Iconoscope followed in 1931, with EMI improving on the design shortly afterwards to produce the Super-Emitron. RCA developed the Image Orthicon, which was in mainstream use from 1946 to the late sixties.
A range of variations on the tube continued to be developed, including the Plumbicon, Saticon, and Trinicon, until their eventual demise with the large-scale uptake of CCD sensors in the 1980s.
Cameras Remain Unchanged
The fundamental operation of a camera hasn't really changed over the years; the only real change has been the type of sensor used.
In early tube cameras, a lens on the front of the camera focuses the image onto the tube's photosensitive plate and a scanning electron beam reads the image. The resulting beam current is proportional to the brightness of the scene, giving rise to the video signal.
Vacuum Tube Technology
Electromagnetic coils placed around the tube provide the horizontal and vertical deflections needed to trace the electron beam on the back of the faceplate to read the image. A heating element was also needed in the base of the tube to supply the source of the electrons, like the glow seen in old valve radios.
Tubes varied enormously in size throughout their development, with early designs measuring up to 18 inches (45cm) long and the more modern Plumbicon tubes measuring 6 inches (15cm). They were far from portable and required a great deal of maintenance and support to keep the camera working reliably.
The introduction of color provided even more challenges as three tubes were needed to represent the red, green, and blue channels. A dichroic block was placed between the lens and the tubes to filter the light into the three channels. Each tube was sensitive to its own color to provide the best color rendition.
Each tube had its own scanning coils and associated drive electronics, giving rise to the problem of camera registration. The line and field synchronizing pulses ensured the three electron beams scanned in step, but each beam's position was slightly displaced relative to the others. The resulting picture had the red, green, and blue images laid over each other, but slightly offset and differing in size.
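As a rough illustration for readers who think in code, the following NumPy sketch (a hypothetical example, not taken from any real camera system) shifts the red and blue channels of an image relative to green, producing the coloured fringing that misregistration causes.

```python
import numpy as np

def misregister(rgb, r_shift=(1, 2), b_shift=(-2, 1)):
    """Simulate tube misregistration by shifting the red and blue
    channels relative to green (shifts are (rows, cols) in pixels)."""
    out = rgb.copy()
    out[..., 0] = np.roll(rgb[..., 0], r_shift, axis=(0, 1))  # red offset
    out[..., 2] = np.roll(rgb[..., 2], b_shift, axis=(0, 1))  # blue offset
    return out

# A white square on black: after misregistration its edges show coloured
# fringes, which is what engineers corrected during camera line-up.
frame = np.zeros((32, 32, 3))
frame[8:24, 8:24, :] = 1.0
fringed = misregister(frame)
```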
High Power Electronics Needed
To counteract this, the camera had many adjustments to change the linearity, size, and position of each tube's electron beam.
Up to the 1980s, cameras relied extensively on analog electronics. The voltages developed across the scan coils could easily reach two or three hundred volts, necessitating the use of high-power driver circuits. Temperature, humidity, and physically moving a camera would cause the electronics to drift, resulting in registration errors and color imbalance.
At the start of each shift, the broadcast engineer would line up all the studio cameras. This was a time-consuming task that relied on great skill and experience. Periodically, throughout a transmission or recording, the cameras would drift, and the broadcast engineer would have to line up the cameras again.
Battery Packs Quickly Ran Down
Portable cameras provided even greater challenges, as the physical environments they were used in did not help the analog electronics, especially in news gathering where conditions could be hostile. Large, heavy, cumbersome battery packs were needed to power the cameras, and a belt pack may have lasted only twenty minutes.
Charge Coupled Devices (CCDs) started to make headway into broadcasting during the 1980s, having three major advantages: they were portable, reliable, and relatively inexpensive.
Based on silicon technology, the CCD began development in 1969 at Bell Labs. The light-sensitive layer is exposed to an image, resulting in a charge being formed proportional to the brightness of the scene. The charges are then moved into a shift register within the same silicon, where they are no longer exposed to the light, and are then read into an analog to digital converter to provide a digital representation of the scene.
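For IT readers, the principle can be sketched in a few lines of Python (a toy model under simplifying assumptions, not a description of any real device): charge accumulates in proportion to brightness, rows of charge are shifted into a readout register, and the register is digitised by an ADC.

```python
import numpy as np

def ccd_readout(scene, exposure=1.0, full_well=10000, adc_bits=10):
    """Toy CCD model: integrate light into charge, shift each row into a
    readout register, then digitise the register with an ADC."""
    # Exposure: charge (electrons) proportional to scene brightness.
    charge = np.clip(scene * exposure * full_well, 0, full_well)

    rows = []
    for _ in range(charge.shape[0]):
        readout_register = charge[-1, :].copy()  # bottom row enters register
        charge = np.roll(charge, 1, axis=0)      # remaining rows shift down
        charge[0, :] = 0                         # top row is now empty
        # Horizontal transfer + ADC: digitise the register values.
        rows.append(np.round(readout_register / full_well * (2**adc_bits - 1)))

    return np.array(rows[::-1], dtype=np.uint16)  # reassemble the frame

# Example: a simple brightness ramp digitised to 10-bit values.
scene = np.linspace(0, 1, 16).reshape(4, 4)
digital_frame = ccd_readout(scene)
```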
No Registration Issues
Replacing the tubes, CCDs were bonded directly to the dichroic block during manufacture. The result was an image that always had perfect registration. No field adjustments were required, and the three-sensor system was virtually maintenance free.
CCDs fundamentally differ from tubes as they use a matrix system of image gathering instead of relying on a scanning electron beam. This is analogous to how a cine film camera works, and it required some creative electronic shuttering to be adopted.
Diagram 2 – The Bayer filter was used by single-CCD cameras and relied on dyes being applied directly to the CCD sensor so the RGB signal could be derived.
All the heavy-duty power electronics needed to drive the scan coils were dispensed with. As these were a major source of power consumption, the adoption of CCDs greatly reduced the amount of power needed to keep the camera working. Battery power packs went from lasting for minutes to lasting for hours – a major win for news gathering.
Bayer Filters
CCDs became more sensitive and offered higher resolution as their technology improved, with the next major change being the single-CCD camera.
To divide the image color into red, green, and blue, each pixel on a single CCD was coated with either a red, green, or blue dye. A specific layout of the colors, weighted towards green, was devised by Bryce Bayer in his 1976 US patent for the Bayer filter.
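A minimal sketch of the idea, assuming the common RGGB arrangement (the layout and function names here are illustrative, not from the article), shows how each pixel records only the colour of the dye above it, with twice as many green samples as red or blue; a later demosaicing step interpolates full RGB values for every pixel.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full RGB image through an RGGB Bayer pattern: each pixel
    keeps only the colour of the dye covering it."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (even rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (odd rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic

# Half of all sites are green, the colour the eye is most sensitive to;
# demosaicing then reconstructs an RGB value for every pixel.
frame = np.random.rand(8, 8, 3)
raw = bayer_mosaic(frame)
```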
Single-CCD cameras no longer needed the dichroic block, so the design was greatly simplified, improving reliability and power consumption. As CCDs are manufactured on silicon production lines, the quality and yield rates are very high, making them much more cost effective than tubes.
Tubes For HD?
Tubes were used in some experimental HD cameras, but by the time HD was gathering momentum CCDs had taken over. It is technically possible to manufacture a 4K tube, but the cost, size, and power consumption would make it a non-viable solution. CCDs, however, keep improving in resolution and sensitivity, so they lend themselves well to HD, 4K, 8K, and beyond.
Although camera sensors have gone through a major technology change from tubes to CCDs, the operational ergonomics of cameras have largely stayed the same. Cameras are now incredibly reliable, and a fault is the exception rather than the norm.
David Austerberry talks about camera development further in his article Is There An Optimum Shape for a Camera?