Compression is almost taken for granted despite its incredible complexity. But it’s worth remembering how compression has developed so we can progress further.
In the days of AM and FM radio, PAL and NTSC television, the viewer or listener could improve the experience by obtaining a higher quality receiver. Even with the best receivers, signal quality could be impaired by interference, although FM radio was relatively immune.
Digital broadcast signals are not like that. The use of digital techniques such as robust modulation schemes and error correction means that there is substantially no loss of information between the transmitter and the receiver. Information is lost instead in the compression codec, specifically during the encoding process. The precision of digital transmission is a mixed blessing when it faithfully carries the compression artifacts of the encoder to the viewer. The quality obtained is pre-determined by the bit rate made available, and the viewer cannot improve it with better equipment.
Digital broadcasts also differed from earlier technology in that the bandwidth of analog TV was fixed, whereas the bit rate of a compressor is variable. One could visualize a form of arm wrestling over the bit rate control, with engineers trying to turn it up and accountants trying to turn it down. This was eventually resolved by firing the engineers.
The experience of DAB (digital audio broadcasting) in the UK is salutary. The system was rushed into use with an early codec that did not offer a high compression factor. With an adequate bit rate it sounded fine. Then the marketing decision was taken to incorporate more audio channels into the multiplex to give the listener more choice. Unfortunately this required the compression factor to be raised. The sound quality cratered and the advertising claims that DAB offered CD quality had to be withdrawn. At the time of writing FM radio is still alive and well in the UK. Possibly the worst mistake of DAB was that the receivers had a fixed decoder instead of one that could be updated. As a result, when the much-improved DAB+ was launched, the earlier DAB receivers became obsolete.
It seems that there are a number of basic forces driving the use of compression. Some are internal; some are external. Traditional analog television standards were very simple because they had to be implemented with vacuum tubes and this led them to be relatively inefficient from the standpoint of information theory. At the time it did not matter, but with the growth of other demands on the RF spectrum, such as from cellular telephones, the bandwidth needs of traditional television started to look excessive and the result was pressure on broadcasters to adopt more efficient solutions in order to liberate bandwidth for other uses.
That was true of standard definition television, but at the same time broadcasters wanted to enhance their product by moving to higher definition along with a wider aspect ratio. The fundamentals of analog television did not favor that kind of change. Doubling the number of lines in the picture requires the bandwidth to be quadrupled. Making the picture wider as well meant around five or six times the bandwidth of SDTV. Such an analog standard was never going to fly and any foray into high definition would only ever be practicable with the use of digital techniques. When the CRT held sway none of that mattered, because the serious size constraints on the CRT meant that the big pictures needed to show off high definition were not possible. Flat screens and compression were the enabling technologies that made HDTV not just possible, but also practicable.
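The bandwidth arithmetic above can be sketched in a few lines. The figures here are illustrative (625-line PAL active line counts are used as an example): doubling the line count doubles both vertical and horizontal resolution, giving roughly four times the bandwidth, and widening the aspect ratio from 4:3 to 16:9 adds a further factor.

```python
# Illustrative arithmetic for analog bandwidth scaling (example figures only).
# Analog bandwidth scales with picture elements per second: doubling the
# line count doubles vertical resolution and, to keep the picture sharp
# horizontally as well, horizontal resolution too -- hence roughly 4x.

sd_lines = 576            # active lines, 625-line system (example figure)
hd_lines = 1152           # double the line count
aspect_sd = 4 / 3
aspect_hd = 16 / 9

# Bandwidth proportional to lines squared, times the aspect-ratio widening.
ratio = (hd_lines / sd_lines) ** 2 * (aspect_hd / aspect_sd)
print(f"Approximate bandwidth ratio: {ratio:.1f}x")   # -> 5.3x
```

The result, a little over five times the bandwidth of SDTV, is consistent with the "five or six times" figure above and shows why an analog HDTV standard was never a realistic proposition.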
There is another force that has to be considered, which is information technology (IT). Broadcasting and IT both depend on the same technologies and both develop in step with them. Most of IT works on file transfer. A digital video recording is essentially a data file, albeit a large one that becomes somewhat smaller when compressed, but for practical reasons it is neither necessary nor desirable to receive the whole file before anything can be seen. All practical compression systems incorporate entry points throughout the file where decoding and viewing can start. Video delivered by broadcast over the air or over a communications network uses a process called streaming, in which the compressed video information is transmitted very nearly in real time so that it can be decoded as it arrives. The amount of storage required in the receiver is dramatically reduced.
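The role of entry points can be shown with a minimal sketch. The stream layout below is hypothetical: frames marked "I" are self-contained entry points, while "P" frames depend on the frame before them, so a decoder joining mid-stream must wait for the next entry point.

```python
# A minimal sketch of entry points in a compressed stream (hypothetical
# layout). "I" frames are self-contained; "P" frames depend on their
# predecessor, so decoding cannot begin at one.

stream = ["I", "P", "P", "P", "I", "P", "P", "P", "I", "P"]

def first_decodable(stream, join_index):
    """Return the index at which a decoder joining mid-stream can start."""
    for i in range(join_index, len(stream)):
        if stream[i] == "I":       # entry point: no dependency on the past
            return i
    raise ValueError("no entry point after join position")

# A viewer tuning in at frame 2 must wait for the entry point at frame 4.
print(first_decodable(stream, 2))   # -> 4
```

The spacing of entry points is a trade-off: frequent entry points mean faster channel changes but a lower compression factor, since self-contained frames cost more bits.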
In digital television broadcasts, the data stream is carried by radio waves to many viewers simultaneously on a time scale determined by the broadcaster; this is merely a continuation of traditional broadcasting, but using data instead of analog signals. When IT is involved, the data stream is carried by a network, but instead of being sent to many viewers at the same time it is sent specifically to the individual who requested it, at a time of their choosing. Strictly speaking that is not broadcasting. The compression system does not know how the data arrived: provided the decoder can find compressed data in the input buffer when it moves on to decode a new frame, it does not much care how it got there. The video disk is simply another way of delivering data, and differs from broadcast and network streaming in that it is easier to use a variable bit rate. Video disks also have the advantage that the results of the compression can be assessed and if necessary improved before the disk is released.
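The variable-bit-rate advantage of the disk can be sketched as a budget allocation. Because the whole program is known before the disk is mastered, bits can be shared out in proportion to scene complexity under a fixed total budget; the complexity scores below are invented for illustration.

```python
# A sketch of variable bit rate as it suits disk mastering: with the whole
# program known in advance, a fixed total budget is allocated in proportion
# to scene complexity. Complexity figures are hypothetical.

complexity = [1.0, 4.0, 2.0, 1.0]      # per-scene difficulty (invented)
total_bits = 8000                       # overall budget for the program

share = [c / sum(complexity) for c in complexity]
allocation = [round(total_bits * s) for s in share]
print(allocation)   # -> [1000, 4000, 2000, 1000]
```

A live broadcast cannot do this, because future complexity is unknown when each frame is encoded; the disk masterer, by contrast, can iterate until the hardest scenes look acceptable.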
Digital cinema has ploughed its own furrow based on different requirements. Traditional cinemas using film already had wide screens and adequate definition, even if the motion portrayal was inadequate. The main purpose of digital cinema was to combat piracy and to reduce distribution costs. There is no requirement for digital movie distribution to be in real time, as movie data are stored locally in the cinema. This means there is no requirement for high compression factors and picture quality is not compromised. The data files are encrypted for transmission to the cinema and remain encrypted in the cinema data storage. The decrypter is in the projector. Encryption systems that repeatedly send the same encrypted data can be attacked by comparing transmissions, and in digital cinema this is avoided by making each projector unique and encrypting the data only for that projector. In that way transmissions to different movie theaters are unique and cannot be compared as a form of attack.
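The per-projector principle can be illustrated with a toy example. This is not the real digital cinema cryptography, merely a demonstration that the same movie data encrypted under two different keys yields unrelated ciphertexts, so deliveries to different theaters cannot usefully be compared. The keystream here is SHA-256 in counter mode, chosen purely for a self-contained sketch.

```python
# Toy illustration (NOT real digital-cinema crypto) of per-projector keys:
# identical plaintext under different keys gives unrelated ciphertexts.

import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of keystream from SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream (decryption is the same operation)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

movie = b"identical movie data sent to every cinema"
c1 = encrypt(b"projector-key-A", movie)   # hypothetical per-projector keys
c2 = encrypt(b"projector-key-B", movie)
print(c1 != c2)   # -> True: the two deliveries cannot be compared
```

Because only the projector holding the matching key can decrypt its own delivery, intercepting several theaters' transmissions yields nothing to compare.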
Digital cinema, video disks, broadcast and network delivery have in common that they are means of final delivery of post-produced material to the viewer. Compression is best suited to applications of that kind and has been highly successful. Unfortunately compression and production steps are incompatible to a great extent, and the higher the compression factor the greater the incompatibility becomes. It is not possible to perform production steps on compressed video; it must be decoded first. If it is then re-encoded there will be generation loss, because video compression is not lossless. If the compression factor is moderate, the generation loss can be contained. This has given rise to compression systems intended for use by broadcasters prior to production steps; on account of their low compression factors these are often called mezzanine systems.
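Generation loss can be modeled in miniature. In the sketch below, coarse quantization stands in for a lossy codec: each decode, production step (here a small gain change) and re-encode introduces fresh error, and the quantizer step size plays the role of the compression factor. The signal values are invented for illustration.

```python
# A toy model of generation loss: coarse quantization stands in for a lossy
# codec. A production step between codec passes means each re-encode adds
# fresh error; the step size plays the role of the compression factor.

def encode_decode(samples, step):
    """Lossy round trip: quantize to the nearest multiple of `step`."""
    return [round(s / step) * step for s in samples]

signal = [0.11, 0.47, 0.83, 0.29]        # invented sample values
step = 0.1                               # coarser step = higher compression

x = signal
for _ in range(5):                       # five production generations
    x = encode_decode(x, step)
    x = [s * 1.03 for s in x]            # a production step between passes

ideal = [s * 1.03 ** 5 for s in signal]  # what lossless processing would give
error = max(abs(a - b) for a, b in zip(x, ideal))
print(f"accumulated error after 5 generations: {error:.3f}")
```

Shrinking `step` (a lower compression factor) shrinks the error of each round trip, which is exactly why mezzanine systems use modest compression factors.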
We expect and find strong similarities between compression techniques used in broadcast, networks and disks, with different approaches used in television production and in digital cinema. There is no single ideal compression system. Instead there are compression tools, which are selected and combined to produce systems that are, or should be, closely tailored to actual requirements. The approach taken in this series will be to consider compression tools individually, before attempting to combine them in real systems.
The history of image compression is remarkably short and the rate of progress has been breathtaking. The performance obtained relies on three foundations: knowledge of the human visual system; knowledge of information theory and mathematics; and knowledge of the microelectronics that allows these fantastically complicated coding schemes to be put into practice at consumer prices. So successful has the technology of video compression been that it is almost at the point of being taken for granted. The average television viewer neither knows nor cares what is going on in those chips inside their TV. The technology has evolved to look after itself and in doing so has become almost invisible.
Much of that success comes from a strong cooperation between physicists, who seek to learn how nature behaves, and technologists who seek to make useful devices, including new tools that help physicists explore further.
That level of success is not accidental, and it contrasts strongly with other areas of human endeavor where problems abound and are apparently not being solved. Not only is that level of success heartening, but it also suggests a model that might usefully be applied elsewhere. The fundamental strength seems to be reliance on first principles that are provable rather than on dogma. Trying to force nature to act the way you want is doomed and the failures are all around.