The mathematics of finite fields and sequences seems a long way from everyday life, but it works in the background every time we use a computer, and without it modern error correction cannot be explained.
What then is a field? It has some similarity to a farmer's field, in that there is a fence around it and some stuff is kept inside the fence and some is excluded. In mathematics a field can be a range of numbers and the fence can be a circle on a Venn diagram. The numbers we use every day are part of an infinite field, so we are going to look at something different.
A good place to start is to consider a binary counter having a finite number of bits, 16, for example. If we start from all zeros and clock this counter indefinitely, it will count up. But when it reaches 65,535 it can't go any higher, because the next number would need 17 bits: an MSB of 1 followed by sixteen zeros. What happens is that the counter overflows to all zeros and starts again.
A binary counter of fixed word length is operating in a finite field. The state of the counter is described by modulo arithmetic. Suppose we have a modulo-5 system. The count goes 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4.... The state of the count is the remainder when the largest possible number of multiples of 5 has been subtracted.
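The modulo count can be sketched in a few lines of Python (the variable names here are mine, not from the figures):

```python
# Modulo-5 counter: the state is the remainder after dividing by 5.
sequence = [n % 5 for n in range(12)]
print(sequence)  # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1]

# A 16-bit binary counter behaves the same way, modulo 2**16.
WORD = 1 << 16
count = (65535 + 1) % WORD
print(count)  # 0: the counter overflows back to all zeros
```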
In any fixed length counter, the infinite range of everyday numbers is simply wrapped around the finite field one more time on each overflow. Pure binary starts at all zeros and overflows at all ones. However, as our finite field is circular, we can re-define it to start anywhere.
If, for example, we decide it is going to start half a revolution away from zero, we have a two's complement system where half of the number range is below zero and half is above. That approach is used in most digital audio systems, where bipolar analog signals need to be digitized. Fig.1 shows a simple example of two's complement. The most significant bit becomes the sign bit.
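As a sketch of the idea, the following Python function (illustrative, not taken from any particular system) interprets a raw 16-bit word as a two's complement value:

```python
def twos_complement(word, bits=16):
    """Interpret an unsigned word as a signed two's complement value.
    If the MSB (the sign bit) is set, the value lies in the negative half
    of the circular number field."""
    if word >= 1 << (bits - 1):
        word -= 1 << bits
    return word

print(twos_complement(0x0001))  # 1
print(twos_complement(0xFFFF))  # -1
print(twos_complement(0x8000))  # -32768, half a turn away from zero
```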
Fig.1 - In two's complement, the circular number field is redefined to begin half a turn away from zero, so zero is in the middle of the scale and positive or negative values can be encoded.
Whilst we can create a field with a pure binary or two's complement count, we have the problem that the speed of our logic is limited by the need to support carries or borrows. We can make logic that works faster if the carries and borrows are eliminated. Instead of counting in binary in a register, what happens instead is that a polynomial is raised to higher and higher powers. That sounds heavy, but it's not.
Consider Fig.2, which is a simple circuit known colloquially as a twisted ring counter. Effectively it is a shift register with feedback, so that the output is returned to more than one stage. The register can start with any value whatsoever, except zero. In this case there are three bits, having eight combinations, but all zeros cannot be used, so the longest sequence will go through seven different states.
It can be seen from Fig.2 that the state of the middle bit is the XOR of the feedback line and the state of the left-most bit. The sequence the system goes through is shown in Fig.2, because with only seven states it is small enough to fit on the page. However many times the system is clocked, the same sequence simply repeats. The sequence has some interesting characteristics. Firstly, it is non-sequential; it's not like a binary count, but it is still a count. To a casual observer it looks random, but in fact it is totally predictable. As a result, the output is called a pseudo-random sequence.
Fig.2 - A three bit twisted ring counter and the sequence it goes through. Mathematically it is raising the starting value a to higher and higher powers.
A system comprising n bits could, if suitably configured, create a sequence having a length of 2ⁿ - 1. The one is subtracted because the all-zeros state isn't allowed. Such a sequence is called a maximum length sequence (MLS).
The sequence which twisted ring counters execute is called a Galois Field, named after Évariste Galois, a French mathematical prodigy who lived in turbulent times and who was killed in a duel. The circuit of Fig.2 has no input, so the sum of what it does is always zero. It is actually raising the number a to higher and higher powers. Fig.2 shows how the powers of a translate into specific bit patterns.
As there are only 7 states in the field, a⁸ is the same as a. The circuit of Fig.2 is a counter, but it is counting the power of a rather than a binary number. As there is no carry between powers, the circuit can run faster than a binary counter.
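The behavior can be sketched in software. The register below is fed back according to the primitive polynomial x³ + x + 1; the exact taps in Fig.2 may differ, but any maximum-length choice gives the same seven-state behavior:

```python
def clock(state):
    """One clock of a 3-bit Galois-style shift register for x^3 + x + 1:
    equivalent to multiplying the register contents by a."""
    msb = state >> 2
    return ((state << 1) & 0b111) ^ (0b011 if msb else 0)

state = 0b001          # the starting value; all zeros is not allowed
states = []
for _ in range(7):
    states.append(state)
    state = clock(state)

print([format(s, "03b") for s in states])
# All seven non-zero bit patterns appear once before the sequence repeats.
print(state == 0b001)  # True: a^7 = 1, so a^8 is the same as a
```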
Pseudo-random and maximum-length sequences are extremely useful and show up in various disciplines. In data recording and transmission, pseudo-random sequences can be added to serial data, modulo-2, in order to break up runs of identical bits that damage clock content and cause DC offsets. Fig.3 shows the principle. As the sequence is deterministic, it can soon be generated at the receiver and removed. The serial digital interface (SDI) used in television production works on that principle, as do DVB and ATSC.
Fig.4 shows the randomizer used in DVB. The sixteen-bit circuit has a sequence length of 65,535, but not all of it is used as the count is reset every eight packets.
Fig.3 - The addition, modulo-2, of a pseudo-random sequence to serial data breaks up long run-lengths and reduces any DC component.
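The principle of Fig.3 can be sketched as follows. The 16-bit generator here uses taps 16, 14, 13, 11, one well-known maximal-length choice; the polynomial and seed actually specified for DVB may differ:

```python
def prbs(seed, n, taps=(16, 14, 13, 11)):
    """Fibonacci-style shift register producing n pseudo-random bits."""
    state = seed
    out = []
    for _ in range(n):
        fb = 0
        for t in taps:                    # XOR the tapped stages together
            fb ^= (state >> (t - 1)) & 1
        out.append(state & 1)             # output the last stage
        state = (state >> 1) | (fb << 15) # shift, feed back into the top
    return out

data = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # long run-lengths
key = prbs(0xACE1, len(data))
scrambled = [d ^ k for d, k in zip(data, key)]   # runs are broken up
recovered = [s ^ k for s, k in zip(scrambled, key)]
print(recovered == data)  # True: adding the same sequence again removes it
```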
Maximum length sequences (MLS) are also useful in acoustics. An MLS has a broad spectrum and so can be used as a noise-like excitation signal for frequency response testing. Being deterministic, it has another advantage, which is that a suitable analyzer can identify the sequence and lock to it, meaning the direct sound from the speaker is analyzed before any reflections arrive.
The pseudo-randomness of an MLS also means that two identical sequences will only correlate when they are exactly synchronous. This means that very accurate timing information can be sent over radio links. The Global Positioning System (GPS) works by very accurate timing, and the synchronization process relies on correlating an MLS. Each satellite uses a different sequence.
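The correlation property is easy to check with the length-7 sequence (one phase of it is hard-coded below, mapped to ±1 values as a receiver's correlator would see them):

```python
# Circular autocorrelation of a length-7 MLS: the peak appears only at
# zero lag; at every other lag the sum collapses to -1.
mls = [0, 0, 1, 0, 1, 1, 1]
s = [1 - 2 * b for b in mls]          # map bit 0 -> +1, bit 1 -> -1
n = len(s)
corr = [sum(s[i] * s[(i + lag) % n] for i in range(n)) for lag in range(n)]
print(corr)  # [7, -1, -1, -1, -1, -1, -1]
```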
The addition of a pseudo-random sequence makes the data slightly harder for unauthorized recipients to decode, but as the generating polynomials of these sequences are well known, they would not result in secure encryption. Secure encryption requires the addition of something that is difficult to predict.
Not all feedback connections result in a maximum length sequence, because if the feedback polynomial factorizes, the sequence may repeat after a shorter length corresponding to one of the factors. There is a reasonably good analogy with gearing, because gears have a finite number of teeth. If a pair of gears having 6 and 24 teeth is considered, it will be clear that after four revolutions of the small gear the system comes back to the same state, because 24 is a multiple of 6.
However, if the small gear is given 7 teeth instead of 6, the system returns to its original state after the small gear has completed 24 revolutions. Good use is made of these long sequences in engineering, because when the tooth count is relatively prime, every tooth of one gear comes into contact with every tooth on the other gear, so the teeth polish themselves into the correct shape instead of amplifying wear patterns. The final drive of most vehicles will have a relatively prime tooth count.
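The gear analogy reduces to lowest common multiples; a quick sketch (the function name is mine):

```python
from math import gcd

def small_gear_revs_to_repeat(small, large):
    """Revolutions of the small gear before the same tooth pairing recurs:
    lcm(small, large) tooth engagements, i.e. lcm / small revolutions."""
    lcm = small * large // gcd(small, large)
    return lcm // small

print(small_gear_revs_to_repeat(6, 24))  # 4: 24 is a multiple of 6
print(small_gear_revs_to_repeat(7, 24))  # 24: relatively prime tooth counts
```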
Fig.5 - Here a twisted ring counter is used to calculate redundancy on four data bits A-D to make a 7,4 code. The three redundant bits are left in the register after the data are clocked in.
Sequences show up in television as well. The half-cycle offset of subcarrier with respect to line frequency results in line pairs, but NTSC has an odd number of lines in the frame. A given relationship between H, V and subcarrier repeated only every four fields, which put constraints on editing.
Fig.5 shows the circuit of Fig.2, except it has a serial input. Seven bits, A through G, are clocked in. The sequence of states after each clock is shown. After seven clocks, there are three bits in the register. Fig.5 also shows the truth table, where it will be seen that there have been three selective parity checks made on four data bits A through D and the three bits left in the register are the parity bits.
If this truth table is compared with the truth table of a 7,4 Hamming code, it will be seen to be identical except for the order of the columns. The four data bits and the three redundant parity bits form a code word. If that code word is clocked into the same circuit without error, the three parity checks will succeed, and the register will be left containing three zeros. However, if there is a bit in error, the register will be non-zero.
The system makes a cyclic redundancy check (CRC). If more than seven bits are clocked in, the system will repeat the same parity check pattern. However, using more bits extends the size of the code word dramatically.
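The division that the register performs can be sketched as follows, using the generator x³ + x + 1; the data bits below are illustrative, not taken from Fig.5:

```python
GEN = 0b1011  # x^3 + x + 1

def remainder(bits):
    """Clock bits through the divider; the 3-bit remainder stays behind."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg & 0b1000:      # a bit falls off the top: subtract (XOR) GEN
            reg ^= GEN
    return reg

data = [1, 0, 0, 1]                      # four data bits
parity = remainder(data + [0, 0, 0])     # three zeros make room for checks
codeword = data + [(parity >> i) & 1 for i in (2, 1, 0)]
print(codeword)                  # [1, 0, 0, 1, 1, 1, 0]
print(remainder(codeword))       # 0: an error-free code word leaves all zeros
corrupted = codeword[:]
corrupted[2] ^= 1
print(remainder(corrupted) != 0) # True: a single-bit error is detected
```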