Creating, moving and storing data is central to virtually all aspects of today’s technology-centric society.
There is practically no aspect of broadcasting today that is not dependent on digital technology. Videography, post production, scheduling, presentation, delivery and conditional access are all digitally controlled. The whole edifice rests on the ability to reliably store and transmit data. John Watkinson argues that this enabling technology should not be taken for granted.
It’s ironic that the name we use for the stuff that our state-of-the-art digital systems handle should come from one of the world’s oldest languages: Latin. Data is a plural noun and the singular is datum. It meant something that is given. Donate comes from the same root.
In the New World it is “datter”; in the Old World it might be “dahta” or “dayta” (let’s call the whole thing off). However it may be pronounced, today it has an idiomatic meaning. A datum may be a reference point, like Mean Sea Level, or Black Level; it may be a point on an agreed scale of some physical variable, like temperature, sound pressure, brightness, colour difference or a subjective variable like satisfaction with an eBay purchase.
Note how important it is that the scale should be agreed. If we are to be subjected to a temperature of 20 degrees, that temperature had better be degrees Celsius, because if it's 20 kelvin we won't survive. This enables us to define information, which is novel data that are meaningful to the destination. If it's not novel, if the destination has had the same stuff before, it's not information; it's redundancy. If the destination doesn't know how the data are encoded, or what the scale is, the data are not useful; the information is lost.
In the case of our temperature example, the destination might be able to guess it was Celsius that was meant. In contrast, encryption is a process where what is agreed between source and destination is so complicated that it has little chance of being guessed elsewhere. It’s the exact opposite of broadcasting where the goal is to reach as many destinations as possible.
Figure 1 shows that the path between the data source and the destination is known as the channel in data systems.
It is both a strength and a weakness of digital technology that only discrete data can be conveyed, so in that context data are assumed always to be discrete. Etymologically speaking, digital relates to counting on the fingers, and only whole numbers can be counted in that way, whereas discrete means that a datum has to be chosen from a finite set, in the same way one chooses which step of a ladder to stand on.
Figure 1. Source data are converted to a signal that is suitable for the channel by a channel coder and converted back again by the data separator. The two must agree on the coding used. Equally, the data source and destination must agree on the scale on which any data are measured. Not all data represent information. The data may be in error, they may be redundant or, if the scale or coding is not agreed, meaningless.
Data are older than digital technology and the position of a datum on many scales may be infinitely variable. Any information source that does not originate in a discrete form has to be converted on entry to our digital system, and converted back again on leaving. The information may be time variant, like an audio signal, or space variant, like a photograph or a solid body. In the first case we must make the waveform discrete by sampling in time, with a sampling frequency measured in Hz. In the second case we sample in space in two dimensions to produce picture elements, or pixels, with a sampling rate measured in a variety of units, such as dots per inch (DPI), samples per millimetre or pixels per screen width. We can also sample solid objects in three dimensions to create volume elements, or voxels.
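The time-variant case can be sketched in a few lines. This is a minimal illustration, assuming a 1 kHz sine wave and a 48 kHz sampling frequency; both values are my own choices, not figures from the text.

```python
import math

# A sketch of sampling in time: a continuous 1 kHz sine wave is replaced by
# discrete samples taken 48000 times per second. Both frequencies are
# illustrative assumptions.
signal_hz = 1000        # frequency of the continuous waveform
sample_rate_hz = 48000  # sampling frequency in Hz

def sample(duration_s):
    """Return the discrete-time samples of the sine over the duration."""
    n_samples = int(duration_s * sample_rate_hz)
    return [math.sin(2 * math.pi * signal_hz * n / sample_rate_hz)
            for n in range(n_samples)]

samples = sample(0.001)   # one millisecond of signal
print(len(samples))       # 48 samples, one full cycle of the 1 kHz tone
```

Each sample is still continuously variable at this point; making its value discrete is the separate step of quantizing, described next.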
The theory of sampling was derived independently in three separate locations. Whittaker in the UK, Shannon in USA and Kotelnikov in the USSR. The frequency of half the sampling rate is called the Nyquist frequency to commemorate Harry Nyquist.
Once the samples, pixels or voxels have been obtained, each one then has to be made discrete. The process of quantizing describes the parameter concerned by the integer number of the nearest step on a discrete scale.
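Quantizing can be sketched as follows. The step size here, giving 256 levels across a -1.0 to +1.0 range, is an illustrative assumption rather than anything fixed by the article.

```python
# A sketch of quantizing: each sample is replaced by the integer number of
# the nearest step on a discrete scale. The 256-step scale is an
# illustrative assumption.
def quantize(value, step):
    """Return the integer number of the step nearest to the value."""
    return round(value / step)

step = 1 / 128                 # 256 steps across the range -1.0 .. +1.0
print(quantize(0.5, step))     # 64
print(quantize(-0.503, step))  # -64 : the value snaps to the nearest step
```

The small difference between the original value and the nearest step is the quantizing error, which proper design keeps below the point where information is lost.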
The key thing to grasp about conversion of this kind is that if done properly, there need be no loss of information. Whilst true, this is not widely accepted, partly because some technical knowledge is needed to understand it, but largely because most popular explanations of sampling and quantizing are simply erroneous and fall apart when exposed to critical thinking.
The discrete samples, pixels or voxels of the source data are converted into discrete symbols which are more suitable for the specific type of channel. As symbols are discrete, they can be described by channel bits. On receipt, the conversion needs to be reversed; again there must be agreement about the conversion between source and destination. The process is called channel coding and is the discrete equivalent of modulation schemes in analog systems such as gamma, AM and FM. The channel coding stages can also be seen in Figure 1.
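As a concrete sketch of a channel code, consider Manchester coding, which guarantees a transition in every bit cell so that the data separator can recover the timing. Manchester is my illustrative choice; the article does not name a specific code, and the 0 to "01", 1 to "10" mapping used here is one of two conventions in use.

```python
# A sketch of a simple channel code: Manchester coding maps each source bit
# to a pair of channel bits, guaranteeing a transition in every bit cell.
# The mapping convention chosen here is an assumption for illustration.
def manchester_encode(bits):
    """Map each bit to two channel bits: 1 -> "10", 0 -> "01"."""
    return "".join("10" if b else "01" for b in bits)

def manchester_decode(symbols):
    """Reverse the mapping, two channel bits at a time."""
    pairs = [symbols[i:i + 2] for i in range(0, len(symbols), 2)]
    return [1 if p == "10" else 0 for p in pairs]

coded = manchester_encode([1, 0, 1, 1])
print(coded)                     # 10011010
print(manchester_decode(coded))  # [1, 0, 1, 1]
```

Note that source and destination must agree on the convention, exactly as the article says: a decoder using the opposite mapping would invert every bit.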
The discrete nature of digital information has been made clear, but why does it need to be like that? The reason is simple but fundamental: any real-world receiver, or any device playing some kind of medium, will not obtain a perfect replica of the original signal. Instead, the channel superimposes on the ideal signal noise, which randomly alters the size of the signal, and timing errors, which we call wow and flutter in audio, time base error in video and jitter in data channels. There may also be distortion of the signal if there is non-linearity in the channel.
If the signal is continuous in time and continuously variable, it is not possible to remove these disturbances because they are unpredictable. However, if the signal is discrete, there is something we can do. A signal that is discrete in time is expected to send symbols at a certain rate, with a constant time between each one. If the time for which each symbol is present is longer than the typical amount of time instability, the symbol can be sampled somewhere in the middle where the instability doesn’t affect the value.
Equally, if the magnitude of the symbol is discrete, so that it can only have a certain number of levels, and if the gaps between the levels are larger than the typical amount of noise, the correct level can usually be decided despite the noise. The number of levels in the signal is usually described by the letter m, so such a signal is described mathematically as m-ary. As the noise gets worse, the number of levels we can reliably discern goes down, and the limiting case is where m = 2 and the transmitted signal uses binary coding. The device that tries to reject the jitter and to identify the symbol despite the noise is called a data separator, which can also be seen in Figure 1.
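The level-decision part of a data separator can be sketched as a nearest-level search. The four legal levels and the noisy received values below are illustrative assumptions.

```python
# A sketch of the level decision in a data separator: with m discrete
# levels spaced wider than the typical noise, a noisy received value is
# snapped to the nearest legal level. The level values are assumptions.
levels = [0.0, 1.0, 2.0, 3.0]   # m = 4 legal symbol levels

def slice_level(received):
    """Return the index of the legal level nearest the noisy value."""
    return min(range(len(levels)), key=lambda i: abs(levels[i] - received))

print(slice_level(1.2))   # 1 : noise of +0.2 is rejected
print(slice_level(2.9))   # 3 : noise of -0.1 is rejected
```

As long as the noise stays smaller than half the gap between levels, every decision comes out right; when it doesn't, we get the errors discussed later.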
Binary coding is eminently suitable for use with recording media, which only have to be recorded in two states. The punched cards used for programming street organs and knitting machines either had a hole in a certain place or they didn’t, and it could be detected using compressed air. The Hollerith punched card used in early computers was developed from that technology. In magnetic media, the domains in the recording can be aligned N-S or S-N. In optical recordings such as CD, DVD and Blu-Ray, the recording either does or does not reflect light from a laser. In flash memory, an insulated well is or isn’t charged up.
A single symbol having one of two values is a binary digit, abbreviated to bit. The bit is the indivisible fundamental unit of information. Unlike the atom, which turned out to be divisible, the bit really cannot be divided. The sole purpose of a bit is to be a one or a zero and it has no other attributes whatsoever. Audiophile bits made of solid gold still contain the same amount of information; they just cost more, which benefits the snake oil vendors.
As each symbol only contains one bit in binary systems, the symbol rate will be high. The penalty for working with a low signal-to-noise ratio is that high bandwidth is needed.
Suppose we have an m-ary system in which m = 4. The four possible combinations are fully described by the combinations of two bits. As each symbol now carries two bits, the symbol frequency can be halved. The noise in the channel will also have to be halved, a reduction of 6 dB, so that four levels can be discerned instead of two.
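The m = 4 case can be sketched directly: bits are grouped in pairs and each pair becomes one of four symbol levels. The particular assignment of bit pairs to levels is an assumption for illustration.

```python
# A sketch of the m = 4 case: each 4-ary symbol carries two bits, so the
# symbol rate is half the bit rate. The pair-to-level assignment is an
# illustrative assumption.
bit_pairs_to_level = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def to_symbols(bits):
    """Group the bits in pairs and map each pair to one of four levels."""
    return [bit_pairs_to_level[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

symbols = to_symbols([0, 1, 1, 1, 0, 0, 1, 0])
print(symbols)   # [1, 3, 0, 2] : eight bits become four symbols
```

Eight bits have become four symbols, so the symbol frequency, and with it the bandwidth required, has halved, at the cost of needing the better SNR described above.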
Information theory can be understood by thinking of a quantity of information as being two dimensional; having an area. The x-axis is the bandwidth and the y-axis is the signal-to-noise ratio (SNR). Figure 2 shows that we can trade one against the other as long as the area remains constant. That trading is the process of channel coding. Recording media tend to use a small SNR and lots of bandwidth. Terrestrial transmission systems try to save bandwidth by using a better SNR and a larger value of m. Satellite systems fall in between.
Figure 2. A modulation scheme can change the combination of bandwidth and SNR required by the information. If one goes down, the other must go up as the product must remain the same if information is not to be lost.
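The trade shown in Figure 2 is made precise by the Shannon-Hartley law, C = B log2(1 + SNR): the same channel capacity can be reached with wide bandwidth and poor SNR, or narrow bandwidth and good SNR. The law itself is standard information theory, though the article does not name it; the numbers below are illustrative assumptions.

```python
import math

# A sketch of the bandwidth/SNR trade using the Shannon-Hartley law
# C = B * log2(1 + SNR). The bandwidths and SNRs are illustrative.
def capacity_bits_per_s(bandwidth_hz, snr_linear):
    """Channel capacity in bits per second for a given bandwidth and SNR."""
    return bandwidth_hz * math.log2(1 + snr_linear)

wide_noisy = capacity_bits_per_s(8e6, 3)      # 8 MHz at SNR = 3 (about 5 dB)
narrow_clean = capacity_bits_per_s(2e6, 255)  # 2 MHz at SNR = 255 (about 24 dB)
print(wide_noisy == narrow_clean)   # True : both channels carry 16 Mbit/s
```

The wide, noisy channel and the narrow, clean one carry exactly the same quantity of information, which is the point of the figure.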
The use of discrete symbols resists noise and jitter, but cannot overcome them absolutely. This is because impairments like noise are statistical. Noise doesn't have an amplitude; it has a distribution of amplitudes, and occasionally the noise will alter the symbol and cause an error. Making the symbol bigger only makes that happen less; it doesn't rule it out.
Here we find another massive advantage of discrete symbols. A binary number can only be a one or a zero. As a result it can’t be a little bit wrong. It is either absolutely correct or absolutely wrong. And if it is absolutely wrong the state only needs to be reversed and it becomes absolutely right again. This raises the possibility of error correction. The correction process is trivial; the hard part is reliably determining which bits are wrong.
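The simplest possible error-correction scheme illustrates the point: send every bit three times and take a majority vote. This toy repetition code is my choice for illustration; real systems use far more efficient codes, but the principle, locate the wrong bit and flip it back, is the same.

```python
# A sketch of the simplest error correction: a rate-1/3 repetition code.
# Reversing a wrong bit is trivial; the redundancy is what locates it.
def encode(bits):
    """Send every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(channel_bits):
    """Majority-vote each group of three received bits."""
    triples = [channel_bits[i:i + 3] for i in range(0, len(channel_bits), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

sent = encode([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
sent[4] = 1                # a channel error flips one bit
print(decode(sent))        # [1, 0, 1] : the error is corrected
```

One flipped bit in any group of three is outvoted by the other two, so the destination recovers the source data exactly despite the channel error.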
Whilst the way in which error correction systems work is a story in itself, the results are striking. Instead of discarding media with slight defects, error correction reconstructs the data lost in the defects. As errors due to noise can be corrected, we can work with lower SNR in the channel, increasing the range of DTV transmitters, lengthening the battery life of cellular phones, and shrinking the size of recording media. That’s one powerful enabling technology.
John Watkinson, Consultant, publisher, London (UK)
Other John Watkinson articles you may find interesting are shown below. A complete list of his tutorials is available on The Broadcast Bridge website home page. Search for “John Watkinson”.