We call them hard disks to distinguish them from floppy disks. As the latter have practically died out as a result of progress in solid-state storage such as flash memory, it is probably not necessary to specify that disks are hard anymore, but tradition is a powerful thing.
The history of the hard drive is a subject in itself, but it may help to understand the origins of these useful devices, which are now ubiquitous in broadcasting, where disk-based file servers form the heart of edit suites and playout systems.
Long ago, when Pontius was a pilot, there were computers that used memory based on magnetic cores. These were tiny ferrite rings that could be magnetized this way or that way to store one bit. Even tinier wires were threaded through the rings to magnetize them and to sense the results.
Early core stores were hand-knitted; later on, machines threaded the wires. But with one core needed per bit, this technology was never going to be inexpensive. Moore's law still had some way to run: solid-state storage was even more expensive and was limited to applications such as cache memory. Did I say solid state? That was a marketing term dreamed up to distinguish semiconductors from the earlier tube-based technology. Today it's about as superfluous as specifying that disks are hard.
The high cost of processors meant that they often had to be shared between multiple users, each of whom would get a slice of time. Making a processor swap between user programs is easy; what wasn't easy was storing all the programs in a way that would make the swapping rapid. Magnetic tape had been tried, and we have all seen those period sci-fi movies where we know we are looking at a computer because we can see the tape reels moving back and forth. Very photogenic and very slow.
But tape, although inexpensive, is too slow for swapping. What was needed was a recording device with faster access than tape but lower cost per bit than core memory. That is where rotating storage fits in.
Fig.1. Using a rapid access device such as a hard drive, only two programs need to be held in processor memory, the one currently being executed and the one about to execute. The rest are held on the drive, making the system appear to have more memory than it really has.
Fig.1 shows how a resource-sharing computer uses a rotating memory to support multiple users. The program belonging to the user currently using the processor is memory resident, and the program belonging to the next user is being loaded into another memory page from rotating storage. All other programs are on the rotating memory. It's a bit like a juggler who can juggle seven balls with only two hands.
Rotating memory has much faster access than tape because it is only necessary to wait one revolution for all of the data on a given track to go by. Sufficient capacity required the use of multiple tracks. Early rotating stores had a magnetic coating on the surface of a drum. Each track had its own head, which allowed rapid switching between tracks, but increased the cost.
There are two reasons to spin the medium as fast as possible. The first is that the rotational latency, the time it takes for the data to come around, is reduced. The second is that the transfer rate is raised. With conventional magnetic recording technology, there is a limit to the head-to-medium speed that can be tolerated before head life becomes a problem. There were three breakthroughs in rotating storage. The first was that a stack of disks had a larger surface area than a drum of the same dimensions. The second was that the head should not be in contact with the medium but should instead operate with a small air gap. In that way there is no wear mechanism and practically no speed limit. The third was the economy obtained by moving one head from track to track across the disk surface.
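Both effects scale directly with rotational speed, which can be seen with a little arithmetic. A minimal sketch (the function names and the 7,200 RPM example are illustrative assumptions, not figures from the text):

```python
def rotational_latency_ms(rpm: float) -> float:
    """Worst-case rotational latency: waiting one full revolution, in ms."""
    return 60_000.0 / rpm  # 60,000 ms per minute divided by revolutions per minute

def transfer_rate_mb_s(rpm: float, bytes_per_track: int) -> float:
    """Sustained rate while reading one track: a whole track passes per revolution."""
    revs_per_second = rpm / 60.0
    return bytes_per_track * revs_per_second / 1_000_000.0

# Hypothetical drive: 7,200 RPM, 1 MB per track
worst_latency = rotational_latency_ms(7200)          # ~8.3 ms per revolution
rate = transfer_rate_mb_s(7200, 1_000_000)           # 120 revs/s -> 120 MB/s
```

Doubling the spindle speed halves the latency and doubles the transfer rate, which is why head wear, rather than any electronic limit, set the practical ceiling.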
Experimental heads were made that used a supply of compressed air, making them like tiny hovercraft, but it was later discovered that the heads could be made to fly on their own using the boundary layer of air adjacent to the medium.
Inter-molecular forces cause the air in contact with the disk to move with it. Fig.2 shows that the air velocity follows a falling profile as a function of height above the disk. The head is on a flexible arm that pushes it towards the disk. As the lift on a head rises with airspeed, the velocity profile allows the head to find a constant flying height. If the height is too great, the airspeed and the lift both fall. If the flying height is too small, the airspeed and the lift both increase. This self-adjusting mechanism allows the head to fly up and down over ripples and warps in the disk, like a terrain-following aircraft in miniature.
Fig.2. The height/velocity profile of a boundary layer showing that the velocity increases if the head comes closer, and the additional lift pushes it up again.
The flying height of the head in a hard drive is very small indeed, and this makes the mechanism very sensitive to contamination. The smallest speck of dust picked up by the head will act like a spoiler and make it fly lower, and hence more likely to contact the disk. With the high head-to-disk speed, the resultant head crash is catastrophic. Nevertheless, attention to cleanliness with adequate air filtration led to a reliable technology.
Note that unlike audio and video disks, which have continuous spiral tracks, hard disks have discrete tracks arranged concentrically so that a stationary head repeatedly scans the same track.
The first hard disk containing these breakthroughs was IBM's RAMAC, which had a stack of several disks on a common spindle, but only one pair of heads, which could be elevated to the required disk and moved radially to select a track. Although it worked, it was found that the process of moving the heads from one disk to another caused excessive delay. The solution was to have one head per disk surface, and to move them radially with a common actuator. That became the standard layout for the moving head disk drive for decades to come.
Magnetic recording requires data to be recorded in blocks, not least so that error detection and correction may be used. The block, which may contain hundreds of bytes, is the basic data quantum of the hard drive, and a whole block must always be written. Large files must be broken up into a series of blocks, and if there are insufficient data to fill the last block, it will be zero-stuffed. If it is desired to edit some small part of a block, it will be necessary to transfer the whole block to memory, make the edit there and re-write the whole block. This is the so-called read-modify-write process that is inherent in all block-based storage.
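The blocking, zero-stuffing and read-modify-write steps above can be sketched in a few lines. This is a minimal illustration, not drive firmware: the 512-byte block size is an assumption (a common historical figure), and the patch is assumed to fit within one block.

```python
BLOCK_SIZE = 512  # bytes per block; a common size, assumed for illustration

def write_file(data: bytes) -> list[bytes]:
    """Split a file into whole blocks, zero-stuffing the final short block."""
    blocks = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        block += b"\x00" * (BLOCK_SIZE - len(block))  # pad last block with zeros
        blocks.append(block)
    return blocks

def read_modify_write(blocks: list[bytes], offset: int, patch: bytes) -> None:
    """Edit a few bytes: the whole containing block is read, modified and rewritten."""
    idx, pos = divmod(offset, BLOCK_SIZE)
    block = bytearray(blocks[idx])       # transfer the whole block to memory
    block[pos:pos + len(patch)] = patch  # make the edit there
    blocks[idx] = bytes(block)           # re-write the whole block
```

Even a one-byte change costs a full block read and a full block write, which is the overhead the read-modify-write cycle describes.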
Fig.3 shows the logical structure of a hard drive. The surface to be used is selected by selecting one of the heads using a head address. The track is the path laid down on one surface by a head in a fixed location. Typically, all of the heads are adjusted to be at the same radius, so the tracks that are laid down by a set of heads lie on the surface of an imaginary cylinder. The radial location of the heads is known as the cylinder address.
Fig.3. The terminology of a disk pack. Every combination of cylinder, sector and head addresses uniquely specifies one data block.
The angular rotation of the disk pack is divided into sectors, just as a round of cheese is cut into wedge shapes. The sector address determines which wedge will be selected. The part of each track that lies within one sector is a block. Each block has a unique combination of cylinder, sector and head address. The process of moving the heads radially to the desired cylinder address is called a seek. The process of scanning the disk blocks on a track until the desired sector is found is called a search.
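The cylinder/head/sector scheme maps naturally onto a single linear block number, and the ordering reflects the access costs: consecutive numbers fill a track, then switch heads (fast), and only then move the actuator to the next cylinder (slow). A minimal sketch with an assumed geometry (the head and sector counts are hypothetical):

```python
# Hypothetical drive geometry, for illustration only
HEADS = 8      # one head per surface
SECTORS = 63   # blocks per track

def block_number(cylinder: int, head: int, sector: int) -> int:
    """Map a cylinder/head/sector address to one linear block number.

    Sector varies fastest, then head, then cylinder, so consecutive
    blocks avoid head movement for as long as possible.
    """
    return (cylinder * HEADS + head) * SECTORS + sector

def chs(block: int) -> tuple[int, int, int]:
    """Inverse mapping: recover the cylinder, head and sector addresses."""
    cylinder, rem = divmod(block, HEADS * SECTORS)
    head, sector = divmod(rem, SECTORS)
    return cylinder, head, sector
```

Every combination of the three addresses yields a distinct block number, which is why the trio uniquely specifies one data block on the pack.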
The fastest data transfer will be obtained if sequential blocks are written around a track. When that track is full it will be quickest to switch to another head and write the next track down. Only when a complete cylinder is full will it be necessary to increment the cylinder address and move the heads, which takes a finite time. Operating systems often use clusters, which are recorded on a whole number of contiguous disk blocks. It is faster to transfer a cluster than to retrieve the same amount of data scattered over various parts of the disk.
The latency of a hard drive is the sum of the seek latency and the search latency. The latency can only be described statistically. If the time taken to perform all possible seeks is added up and divided by the number of possible seeks, an average seek time will emerge. Equally, the average search time is half a revolution.
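The half-revolution figure for average search time follows directly from the spindle speed. A small worked example (the spindle speeds chosen are assumptions for illustration):

```python
def avg_search_time_ms(rpm: float) -> float:
    """Average search latency: on average the desired sector is half a
    revolution away, so the figure is half the revolution period."""
    revolution_ms = 60_000.0 / rpm  # one full revolution in milliseconds
    return revolution_ms / 2.0

# Hypothetical spindle speeds
slow = avg_search_time_ms(3600)   # ~8.3 ms average search
fast = avg_search_time_ms(7200)   # ~4.2 ms average search
```

The average seek time, by contrast, depends on the actuator and the distribution of cylinder distances, which is why it has to be measured or averaged over all possible seeks rather than derived from a single formula.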
Early disk drives were huge, heavy and expensive, but that all changed due to two factors. One of these is that the capacity of a drive is determined by a large number of factors, and if each one is improved only a few percent, the overall improvement is dramatic. The other factor is that the advent of the personal computer led to soaring demand for small hard drives and this inevitably drove the price down.