Being able to detect errors and then correct them is central to today's digital world.
Digital Media: John Watkinson argues that from an information standpoint all digital media are the same, yet in practice completely different.
Before digital storage, legacy media all suffered from noise, distortion, dropout, crosstalk, time base error, generation loss and so on, and media had to be compared on all of those counts. As different types of signal had different sensitivities to these problems, it followed that each type of signal would need a medium optimized for it. Such dedicated media would also reproduce information such as sound and moving pictures with an agreed time base that was built in to the equipment.
Legacy media all suffered from generation loss. A copy of a tape, a photograph or a film was always inferior to the original. This meant that recorders intended for professional use, where several generations are needed for editing and replication, had to be better than necessary so that after generation loss the result would still be acceptable. Consumer equipment needed to be acceptable for only one generation.
Digital media also have qualities, but they are quite different. Any competent digital medium will have an error correcting system that is appropriate for the sensitivity of the data. For all practical purposes the data that are reproduced are the data recorded. Thus digital media suffer no generation loss; they have no signal quality.
In the absence of generation loss, there is no requirement for a digital medium to be better than necessary and it can therefore be more compact than its forerunners. The distinction between consumer and professional is also blurred.
The quality of digital audio is determined entirely by the ADC that created the data: the sampling rate, the word length and the linearity. It would not make any difference to the sound quality if the data were to be recorded by an army of trained woodpeckers instead of on a hard drive.
As various combinations of sampling rate and word length allow signals such as audio, stereo, SDTV, HDTV and so on to be converted to data, the only difference between them is the data rate. Provided the data rate can be supported and any implicit time base is faithfully reproduced on delivery, a digital medium does not care what the data represent. The result is that dedicated media are dying out.
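The point that sampling rate and word length fully determine the data rate can be shown with a little arithmetic. A minimal sketch follows; the CD audio figures (44.1 kHz, 16 bits, two channels) are standard, while the function name and layout are my own illustration.

```python
# Sketch: the raw data rate of any sampled signal follows from the same
# parameters the article names, whatever the signal happens to represent.

def data_rate_bps(sampling_rate_hz: int, word_length_bits: int, channels: int) -> int:
    """Raw (uncompressed) data rate in bits per second."""
    return sampling_rate_hz * word_length_bits * channels

# CD-quality stereo audio: 44.1 kHz sampling, 16-bit words, 2 channels.
cd_audio = data_rate_bps(44_100, 16, 2)
print(cd_audio)  # 1411200 bit/s
```

A medium that can sustain this rate, and deliver the samples against a steady time base, can carry the audio regardless of what technology it uses.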
The ubiquitous hard drive has an error correction system intended for the recording of computer code, which has the most stringent error requirement. For all other types of data it is better than necessary.
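Error correction is what allows the reproduced data to equal the recorded data. A real drive uses far more powerful codes, but the principle can be shown in miniature with a classic Hamming(7,4) code, which protects four data bits with three parity bits and corrects any single flipped bit. This is an illustrative sketch, not the code any particular drive uses.

```python
# Hamming(7,4) in miniature: four data bits, three parity bits,
# any single-bit error on readout is detected and corrected.

def encode(d):                        # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):                        # c = 7-bit codeword, possibly corrupted
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3        # 0 means no error; else 1-based bit index
    if pos:
        c[pos - 1] ^= 1               # correct the single flipped bit
    return [c[2], c[4], c[5], c[6]]   # recover the data bits

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                          # simulate a dropout flipping one bit
print(decode(sent))                   # [1, 0, 1, 1] — the original data, intact
```

Scaled up enormously, the same idea is why a digital medium has, for all practical purposes, no signal quality of its own.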
As data are discrete, they can temporarily be stored in memory on replay. The readout from the memory can be controlled to arbitrary accuracy so that any time base disturbances due to the medium are eliminated. The same is true of data transmission networks such as the Internet. Data transmission can wait for another day as I propose to concentrate on storage for the present.
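The time base correction described above can be sketched as a simple first-in, first-out buffer: the medium writes into it at whatever irregular rate it manages, and a steady clock reads out. The function names and sample values are illustrative only.

```python
from collections import deque

# Sketch of time base correction: data enter the buffer at the irregular
# rate the medium delivers them, and leave on a steady, accurate clock.
buffer = deque()

def write_from_medium(samples):
    """The medium delivers bursts at irregular intervals; just enqueue them."""
    buffer.extend(samples)

def read_on_steady_clock():
    """Called once per output sample period by a crystal-locked clock."""
    return buffer.popleft() if buffer else None   # underflow -> None

write_from_medium([1, 2])      # a jittery burst from the disk or tape
write_from_medium([3])         # another burst, arriving whenever it arrives
out = [read_on_steady_clock() for _ in range(3)]
print(out)  # [1, 2, 3] — arrival timing is gone, order is preserved
```

Provided the buffer never runs empty or overflows, the output timing owes nothing to the medium at all.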
The performance qualities that digital media have include the cost per bit, the access time, the transfer rate and the density, which means the number of bits stored per unit area or volume. In addition there are subsidiary qualities such as robustness, life span and ease of replication.
Were there one universally superior data storage technology, all of the others would have died out, but this has not happened. Instead there are a number of technologies having advantages and drawbacks. The best technology for a given application is one where the drawbacks are most acceptable.
Figure 1. In the rotary head system shown at a) discontinuous tracks are laid down by the scanning head. Rotating the medium instead produces circular tracks as shown at b).
In magnetic recording, whether using discs or tapes, the medium as manufactured is completely uniform and featureless. Whatever structure, known as a format, appears on the medium is entirely down to the writing device.
A television picture is, or was, created by the process of scanning; the format of a magnetic medium is created in the same way. A magnetic head writing as it scans across a blank medium leaves a track behind it. Some mechanical process causes the next track to be written parallel to the previous one.
There are three basic ways in which magnetic media can be scanned. In the first, a magnetic tape is driven past a fixed head that has one or more magnetic circuits in it so that various numbers of parallel tracks can be written. The audio tape recorder and its descendant the Compact Cassette worked in this way, along with early computer tape decks.
In the second, shown in Figure 1a, the head or heads are mounted on a rotating assembly that scans the tape as it moves past and produces tracks of finite length. According to the geometry, the tracks may be short and almost transverse to the tape, or may be sloping and thereby longer. The first practical video recorders used transverse scan, but the industry moved to helical scan once the mechanical difficulties were overcome.
Tape has the advantage that it can be rolled up onto a reel for compact storage, but that brings the disadvantage that the tape has to be wound to-and-fro to locate the wanted data, which extends the access time. That led to the third way of scanning: the rotating magnetic memory. Initially the surface of a drum was coated with magnetic material, but later one or more discs were used as shown in Figure 1b.
Rotating storage has the advantage that the entire data surface is permanently exposed and access to it will be correspondingly quicker. In early drum stores, there was one head for each track on the drum and the access time could not exceed the time of one revolution.
Early disk drives evolved with fixed heads, but it was soon realized that a single head that could be positioned to various radii would be more economical. In that case the access time is the sum of the latency due to the head positioner and the rotational latency.
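That sum is easy to put into figures. On average the wanted data are half a revolution away when the head arrives, so the mean rotational latency is half the revolution time. A minimal sketch, with an illustrative seek time and spindle speed (the 8.5 ms and 7200 rpm below are typical of a modern drive, not any specific product):

```python
# Sketch: average access time of a moving-head disk drive is the
# positioner (seek) latency plus, on average, half a revolution.

def average_access_ms(seek_ms: float, rpm: float) -> float:
    revolution_ms = 60_000.0 / rpm        # one revolution, in milliseconds
    return seek_ms + revolution_ms / 2.0  # mean rotational latency = half a rev

# e.g. 8.5 ms average seek at 7200 rpm:
print(round(average_access_ms(8.5, 7200), 2))  # 12.67 ms
```

The fixed-head drum of the previous paragraph is simply the special case where the seek term is zero.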
In the early days of digital computers, the sheer cost meant that the processor had to be shared between multiple users. The cost of memory was such that it was impractical for all users' programs and data to be memory resident. The time-sharing computer evolved to work with a storage device such that the memory needed to hold only the current program and the next one, in the same way that a juggler has only two hands. Once the next program was being executed, the storage device would replace the previous one with the one after that.
The system could only work if the storage device was fast enough. The disk drive was the technology of choice and received steady development that continues to this day. The emergence of the personal computer led to the demise of time sharing and caused an explosion in the market for disk drives and consequent economy of scale. High performance tape drives would still be needed for back-up and archiving.
Figure 2. The early longitudinal process a) recorded the medium from one side, whereas the later vertical recording b) drives flux right through the data layer and returns it via a permeable substrate.
It is inherent in the process of scanning that the data must be recorded serially along the track. In order to recover the data, a head must first locate the correct track and then follow it accurately enough that an adequately strong signal is obtained from the wanted track while crosstalk from adjacent tracks is minimized.
Figure 2a shows a conceptual recording head of an early type in which the medium completes a magnetic circuit that is mostly in the head. The recording is made as the medium recedes from the trailing edge of the gap and hysteresis allows it to remember the last magnetic polarity to which it was exposed.
The magnetic grains in the medium are anisotropic and when the medium is made, they are aligned with the direction of motion of the medium. This is known as longitudinal recording and all early tape and disk drives operated in this way. Binary recording leaves the grains magnetized left-right or right-left.
For economic reasons, there is constant pressure to record more data in a smaller area. The amount of data per unit area is called the superficial, or areal, density. This means that the magnetized features on the surface of the medium must get smaller and smaller. The tracks must get narrower and the recorded wavelengths must get shorter.
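The density follows directly from those two feature sizes: one bit occupies roughly one track pitch by one bit length. A minimal sketch, with illustrative dimensions of my own choosing rather than any particular drive's geometry:

```python
# Sketch: superficial (areal) density from the two feature sizes the
# text names - track pitch and recorded bit length.
NM_PER_INCH = 25.4e6  # nanometres in one inch

def density_gbit_per_sq_inch(track_pitch_nm: float, bit_length_nm: float) -> float:
    bits_per_sq_nm = 1.0 / (track_pitch_nm * bit_length_nm)
    return bits_per_sq_nm * NM_PER_INCH ** 2 / 1e9

# Illustrative modern-scale features: 50 nm track pitch, 13 nm bit length.
print(round(density_gbit_per_sq_inch(50, 13)))  # on the order of 1000 Gbit/sq in
```

Halving either dimension doubles the density, which is exactly the pressure that shrinks the magnetized features.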
Below a certain size, random thermal energy reaches the same order as the energy needed to move a domain wall, which means that a magnetized area can spontaneously flip and cause an error. The obvious solution is to use media having higher coercivity. However, it is a characteristic of longitudinal recording that the penetration of the recording into the medium goes down with wavelength, and this reduces the volume and thus the magnetic energy of each recorded feature.
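The thermal argument can be put in rough numbers. A stored feature is stable while its magnetic energy, the anisotropy constant Ku times the grain volume V, is many times the thermal energy kT; a ratio of roughly 60 is the usual rule of thumb for decade-long retention. The Boltzmann constant is exact; the Ku value and grain sizes below are illustrative only.

```python
# Sketch of the superparamagnetic limit: compare magnetic energy Ku*V
# of one grain against thermal energy kT. Figures are illustrative.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def stability_ratio(ku_j_per_m3: float, grain_nm: float, temp_k: float = 300.0) -> float:
    volume_m3 = (grain_nm * 1e-9) ** 3           # model the grain as a cube
    return ku_j_per_m3 * volume_m3 / (K_B * temp_k)

# Shrinking the grain from 10 nm to 8 nm cuts the ratio roughly in half,
# because the energy scales with volume:
print(round(stability_ratio(2e5, 10)))   # marginal
print(round(stability_ratio(2e5, 8)))    # unstable -> needs higher Ku (coercivity)
```

This is why shrinking the features forces either higher-anisotropy media or, as the next paragraph describes, a geometry that preserves the magnetic volume.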
Figure 2b shows the principle of perpendicular or vertical recording, in which some of the magnetic circuit is in a permeable substrate below the active layer of the medium. As before, the recording is made at the trailing edge of the gap, but the presence of a soft magnetic layer below the active layer allows the head to create a magnetic field that passes through the active layer vertically. The grains in the active layer are oriented vertically in manufacture.
A binary recording now leaves the active layer recorded down-up or up-down. Modern hard drives work in this way. The superficial density has risen, but the volume and thus the energy of the magnetic feature is maintained by turning it on end.
Part 1 of this three-part series, Data: Sampling Principles, can be found at this link.
Part 2 of this three-part series, Data: Principles of Storage, can be found at this link.
John Watkinson Consultant, publisher, London (UK).