As demand increases for ever more compact and faster storage, so too does the industry's technology to satisfy that need.
One measure of the performance of a data storage device is the cost per bit. Simple economics forces a constant improvement in density that allows more data in the same space. John Watkinson looks at what the limits are, which of them are fundamental, and which can be pushed back.
In a recording having tracks, such as an optical or a magnetic disc or a magnetic tape, the number of bits per unit area (the superficial density) is determined by the linear density (the number of bits per unit track length) and the number of tracks per unit distance. As these figures multiply together, it is easy to see that improving both at the same time can have startling results. For example, a 40 percent improvement in both doubles the superficial density.
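The arithmetic is simple enough to check directly; the figures below are purely illustrative, not taken from any real drive:

```python
# Superficial (areal) density is the product of linear density and
# track density (illustrative numbers in arbitrary units).
linear_density = 100_000      # bits per unit track length
track_density = 1_000         # tracks per unit distance

areal_density = linear_density * track_density

# A 40 percent improvement in both figures multiplies together:
improved = (linear_density * 1.4) * (track_density * 1.4)

print(improved / areal_density)   # 1.4 * 1.4 = 1.96, very nearly double
```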
It is not enough that the physics allows very narrow tracks if the pickup cannot be registered with them. High-density data recording relies on some extremely accurate track following technologies that will be considered in future articles.
In general the dominant limit to linear density in storage media will be an aperture effect. We have seen that a conventional magnetic head contains a gap between the two poles and essentially sums the signals from the two poles, one of which must be a delayed version of the other. Such a head acts like a comb filter. In the case of vertical magnetic recording, there is only a single effective reading pole, but it has finite size.
Figure 1. The frequency response of a magnetic channel is far from optimal, which is one reason why analog magnetic recording is so difficult. The dominant limit is the null due to the aperture effect of the head.
The recording is essentially convolved with the rectangular aperture due to the finite width of the pole or the gap between the poles. Convolution is a time-domain process, whereas in the frequency domain we would take the Fourier transform of the rectangular aperture. The FT of a rectangle is our old friend the sinx/x function that crops up in displays, ADCs and in loudspeaker directivity. Figure 1 shows what the frequency response of a magnetic channel will look like. The response exhibits periodic nulls.
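A minimal sketch of that response, treating the gap as a rectangular aperture (units are arbitrary; a gap width of 1 puts the first null at a frequency of 1):

```python
import math

def aperture_response(f, gap=1.0):
    """Magnitude response of a rectangular aperture of width `gap`:
    |sin(pi*f*gap) / (pi*f*gap)|, taking the limit of 1 at f = 0."""
    x = math.pi * f * gap
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# The response falls from unity at DC to a null where one recorded
# wavelength fits exactly in the gap, and the nulls repeat periodically.
for f in (0.0, 0.5, 1.0, 2.0):
    print(f, round(aperture_response(f), 4))
```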
In analog magnetic recording, the highest useful frequency that can be recorded is somewhat below the frequency of the first null. However, in digital recording, we have more modulating/coding ability and can create a recorded spectrum extending up to the null.
Figure 2 shows (top) that a single magnetic transition (a reversal of magnetic polarity) in isolation generates a Gaussian-ish pulse in an inductive replay head. The next transition will produce a pulse of opposite polarity. If those pulses are integrated, a bandwidth-limited and noisy replica of the record waveform will emerge, as seen at the centre of Figure 2. In a magneto-resistive head the integration is not needed. The integrated signal is sliced, or converted to binary, and a version of the original record waveform emerges, as evident in the lower trace of Figure 2.
Figure 2. As a conventional inductive replay head is a dynamo in miniature, the output (top) depends on the rate of change of flux and so is the differential of the recorded signal. If the replay signal is integrated, (centre) the effect is opposed. The original binary signal can be recovered by slicing the integrated waveform (bottom). Slicing is a form of quantizing.
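The differentiate, integrate and slice chain of Figure 2 can be sketched numerically. Noise and bandwidth limiting are omitted for clarity, and the flux level before the first bit is taken as zero in this sketch:

```python
# Record waveform: binary data mapped to +1/-1 magnetization.
data     = [1, 0, 0, 1, 1, 0, 1]
recorded = [2 * b - 1 for b in data]

# An inductive head outputs the rate of change of flux: a pulse at
# each transition, of alternating polarity, and nothing elsewhere.
head_output = [recorded[0]] + [recorded[n] - recorded[n - 1]
                               for n in range(1, len(recorded))]

# Integration opposes the differentiation and restores the waveform...
integrated = []
acc = 0
for v in head_output:
    acc += v
    integrated.append(acc)

# ...which is then sliced about a threshold of zero to recover the data.
sliced = [1 if v > 0 else 0 for v in integrated]
print(sliced == data)   # True
```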
As recorded, the channel bit rate is constant and the data encoding determines whether, at each channel bit, there will or will not be a transition. The recorded waveform contains various periods, all integer multiples of the channel bit period. The number of channel bits for which there is no transition is called the run-length. If an oscilloscope were set to re-trigger on any transition, it would superimpose on the screen a number of waveforms of varying run-length that is called an eye-pattern, see Figure 3. The eyes repeat at the channel bit rate.
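Extracting run-lengths from a channel-bit sequence is mechanical; the sequence here is purely illustrative:

```python
def run_lengths(transitions):
    """Return the number of channel bits with no transition between
    successive transitions (the run-lengths, per the definition above)."""
    positions = [i for i, t in enumerate(transitions) if t]
    return [b - a - 1 for a, b in zip(positions, positions[1:])]

# 1 = a transition at that channel bit, 0 = no transition
print(run_lengths([1, 0, 0, 1, 1, 0, 0, 0, 1]))   # [2, 0, 3]
```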
The data separator in the replay channel takes a great deal of notice of the eye pattern. A phase-locked loop changes the phase of a channel-bit-rate clock so that the replay signal level will be sampled in the horizontal centre of each eye. An automatic gain control keeps the height of the eyes sensibly constant and a slicer places a threshold, essentially a quantizing level, half-way up the eye.
Figure 3. The eye pattern seen on an oscilloscope results from superimposition of many individual traces. Noise causes the height of the eyes to reduce whereas jitter and ISI (inter-symbol interference) cause the width of the eyes to reduce. The data separator must phase-lock to the eye pattern so that a binary decision can be made in the centre of each eye.
The result is that at the centre of every channel bit a binary decision is taken whether the replay waveform is above or below the threshold of the slicer. This is classic box-car detection, so called from the analogy of shooting across a railroad while a train is passing, by firing between the boxcars.
The binary waveform emerging is a near-replica of the recorded signal, free of the replay noise and jitter, unless on occasion they are too great, in which case the binary signal will be in error. In modern devices the replay waveform may simply be fed to an ADC and all of the data separation takes place in a processor.
A large number of magnetic media work in that way: credit card stripes and metro and airline tickets, for example. High-density recording pushes beyond that. If the transitions are now recorded closer and closer together, they will interact. This is known as inter-symbol interference (ISI), which causes the eyes in the eye pattern to get smaller and increases the probability of error.
As ISI cannot be prevented, one solution to the problem is to work with it. If the replay pulses are equalized in a certain way, there will be complete interference between adjacent replay pulses, but no interference elsewhere. If adjacent transitions are of the same polarity, the sum will be positive or negative. If they are of opposite polarity, there will be complete cancellation; the sum will be zero. As the interaction is known, the data can be decoded.
Figure 4a shows that the output signal will display an eye pattern having two eyes, one above the other, due to the addition of a third signal level. The data separator must now have two thresholds and output a ternary signal, giving rise to the name of partial response.
If a very simple pre-coder is used, as shown in Figure 4b, the replay signal is given the interesting characteristic that if the odd and even channel bits are considered separately, symbols occupying the outer levels will always alternate. In other words, in an odd bit stream, if a particular channel bit after slicing displays the highest level in the ternary signal, the next channel bit that does not have the centre level will occupy the lowest level.
Figure 4a. Where there is cancellation between adjacent transitions, a third level appears in the replay signal and the eye pattern has two sets of eyes.
Figure 4b. Using this pre-coder, outer levels in the odd or even data streams always alternate, which is a form of redundancy.
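The interleaved alternation can be demonstrated with a sketch. This assumes class-IV partial response (a 1−D² channel, where each replay sample is the difference between recorded levels two channel bits apart) and the common pre-coder b[n] = a[n] XOR b[n−2]; the exact structure shown in Figure 4b may differ:

```python
import random

def precode(data):
    """Pre-coder: b[n] = a[n] XOR b[n-2]; the two bits before the
    start of the recording are taken as 0 (an assumption here)."""
    b = []
    for n, a in enumerate(data):
        b.append(a ^ (b[n - 2] if n >= 2 else 0))
    return b

def pr4_channel(b):
    """1 - D^2 channel: recorded levels are +/-1 and each equalized
    replay sample is the difference between levels two bits apart,
    giving the ternary values -2, 0 and +2."""
    c = [2 * x - 1 for x in b]
    return [c[n] - (c[n - 2] if n >= 2 else -1) for n in range(len(c))]

random.seed(1)
data = [random.randint(0, 1) for _ in range(1000)]
y = pr4_channel(precode(data))

# Decoding is now memoryless: a nonzero (outer) sample means a 1.
decoded = [1 if v else 0 for v in y]
assert decoded == data

# In the odd and even interleaves separately, outer levels alternate.
for parity in (0, 1):
    outer = [v for v in y[parity::2] if v]
    assert all(p * q < 0 for p, q in zip(outer, outer[1:]))
print("outer levels alternate in both interleaves")
```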
This is a form of redundancy and it allows recovery of some of the effective signal-to-noise ratio that was diminished by the smaller eyes of ternary coding. If, due to noise, the value of one of the channel bits is uncertain, the uncertainty can be resolved by choosing the state that respects the alternating sequence. The process is called Viterbi detection, also known as Partial Response Maximum Likelihood (PRML) detection, and whether it should be considered a form of noise reduction or a form of error correction is moot.
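A toy illustration of the principle, resolving a single ambiguous sample in one interleave by the alternation rule; this is a sketch of the idea only, not a real Viterbi detector:

```python
def resolve(received):
    """Resolve one ambiguous sample (None) in one interleave of a
    sliced PR4 signal, using the rule that successive nonzero (outer)
    samples must alternate in sign."""
    i = received.index(None)
    before = [v for v in received[:i] if v]      # nonzero samples before
    after  = [v for v in received[i + 1:] if v]  # nonzero samples after
    if before and after and before[-1] == after[0]:
        # Two like outer levels with nothing between them would violate
        # alternation, so the ambiguous sample must be the opposite level.
        return received[:i] + [-before[-1]] + received[i + 1:]
    # Otherwise the centre level (zero) is consistent with alternation.
    return received[:i] + [0] + received[i + 1:]

# The ambiguous sample sits between two +2 levels, so it must be -2.
print(resolve([2, 0, None, 0, 2, -2]))   # [2, 0, -2, 0, 2, -2]
```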
In the case of optical discs, the recording density limit is primarily determined by optical physics. The replay of an optical disc is essentially the same process as that of a scanning microscope. Microscopes cannot reveal features smaller than their limiting resolution. In an article about lenses I discussed diffraction limiting and that is the dominant effect in determining the size of the point spread function of a lens.
The lens, naturally enough, suffers an aperture effect. As there is no such thing as negative light, the convolution performed by the lens must be considered in terms of light intensity. The squaring needed to obtain intensity turns the sinx/x of the magnetic response into the sin²x/x² of the optical response. The frequency response of optical pickups is triangular and falls to a null, beyond which there is no response at all.
The intensity function shown in Figure 5 is called an Airy disc after George Biddell Airy, the then Astronomer Royal, who first described it. A sombrero-like shape, it has a central peak where most of the energy resides, surrounded by alternating dark and light rings. Such a function doesn’t have a true diameter, so when spot sizes in optical discs are quoted, it will typically be the diameter at which the intensity has fallen to half of the peak.
Figure 5. The Airy disc is the result of an optical aperture effect. All optical media are limited by this effect.
Unlike magnetic recording, in which the track spacing and the linear density are determined by different factors, the circular light spot of the optical disc determines both at once. Figure 5 shows that the track spacing is generally such that the adjacent track will be in the first dark ring in order to minimize crosstalk.
The spot size is a function of lens quality, lens aperture and light wavelength. In practice the lenses are made so accurately that it is only the aperture and the wavelength that matter. Progress in recording density in optical discs has primarily been made as developments in lasers allowed a reduction in the wavelength of the light used. The Compact Disc uses light of 780nm wavelength, which is in the infra-red region, whereas in Blu-ray the wavelength has fallen to 405nm which human vision would say was violet. The lens aperture cannot be increased indefinitely as it becomes harder to maintain focus.
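Taking the wavelengths above together with the published numerical apertures (NA ≈ 0.45 for CD, NA ≈ 0.85 for Blu-ray), a rough estimate of the density gain can be made. The spot-size formula λ/(2·NA) used here is one common approximation, not an exact figure:

```python
def spot_diameter(wavelength_nm, na):
    # One common approximation for the diffraction-limited spot size.
    return wavelength_nm / (2 * na)

cd = spot_diameter(780, 0.45)   # roughly 870 nm
bd = spot_diameter(405, 0.85)   # roughly 240 nm

# Areal density scales roughly as the inverse square of the spot size.
# The remainder of the real CD-to-Blu-ray capacity gain comes from more
# efficient channel coding and tighter margins.
print(round((cd / bd) ** 2, 1))
```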
Magnetic recorders are not diffraction limited and their superficial density comfortably exceeds that of optical discs. However, optical discs remain popular because they have two significant advantages over magnetic discs. The first is that the optical disc can readily be replicated by pressing. The second is its resistance to contamination. The data layer is on the far side of the disc from the pickup. Light enters the disc over a large area and the disc can still be read in the presence of contamination. As a result, optical discs can be handled as removable media by the consumer, whereas magnetic discs have to be sealed in the drive.
John Watkinson, Consultant, publisher, London (UK)
A complete list of John Watkinson’s tutorials is available on The Broadcast Bridge website.