Timing: Part 6 - Synchronization

The need for synchronization rears its head in so many different endeavors that it has to be accepted as one of the great enabling technologies.

Around the First World War the airplane became relevant to warfare, and it was soon discovered that synchronizing the guns with the rotation of the airscrew was a good idea, so that the rounds passed between the blades.

From that was developed an explanation of the boxcar detector, an electronic device formally known as a synchronous rectifier. The explanation held that it was possible to shoot someone on the other side of a railroad track even in the presence of a train, by firing between the boxcars.

The synchronous rectifier shows up in the decoder needed to convert the chroma signal of composite video back to a pair of baseband color difference signals. In NTSC, the I signal modulates the amplitude of a subcarrier and the Q signal modulates a quadrature subcarrier. The two are orthogonal and can be added together.

Fig.1 - A typical data block contains a preamble to synchronize the PLL and a sync pattern to enable parsing. The preamble has high frequency accuracy but low timing content and the sync pattern is the reverse.

To recover the baseband signals, the chroma signal needs to be sampled four times per cycle. When correctly phased the samples will represent I, Q, -I and -Q repeating. The reason for the color burst now becomes apparent. The burst provides the reference for the timing of the four samples.

The sampling process does more than separate the components. Sampling the chroma signal at subcarrier frequency produces upper and lower sidebands. The lower sideband signals are the wanted I and Q base band signals.
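A minimal sketch of that sampling process, assuming idealized constant I and Q values and samples taken exactly in phase with the burst (the constants here are arbitrary, for illustration only):

```python
import math

# Hypothetical constant baseband values, purely for illustration.
I, Q = 0.6, -0.3
fsc = 3579545.0  # NTSC color subcarrier frequency, Hz

def chroma(t):
    # Chroma: I modulates the in-phase subcarrier, Q the quadrature one.
    return I * math.cos(2 * math.pi * fsc * t) + Q * math.sin(2 * math.pi * fsc * t)

# Sample four times per subcarrier cycle, phased to the burst (t = 0).
samples = [chroma(n / (4 * fsc)) for n in range(4)]
# The four samples come out as I, Q, -I, -Q repeating.
```

Shift the sampling phase away from the burst reference and the four values become mixtures of I and Q, which is exactly why the burst is needed.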

Analog signals are corrupted by noise, and in the digital domain we get bit errors. It is often thought that these are the greatest source of corruption in storage and transmission, but that is not the case. The greatest potential source of data corruption is loss of synchronization. In analog television, loss of sync causes the picture to tear and/or frame roll; the whole picture is affected.

In almost every data handling situation, data are assembled into blocks. This allows multiplexing in networks and it allows use of discontinuous media such as hard disks. It simplifies the use of error correction. Every block has a closely specified structure in which everything is assembled in the right place. Once assembled, the block is typically serialized so it can be fed into a network or a recording head.

The whole system relies on being able to pick apart, or parse, the block correctly when it is received or retrieved. That means identifying the first bit of the serialized block without fail. But suppose that mechanism does fail and the first bit is wrongly identified as one bit further along than it should be. What will happen? Very simply, total chaos, because what we think is a data byte is actually seven bits of one byte and one bit of another. That will be true of the whole block.
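The effect of a one-bit slip can be sketched with a toy block and MSB-first serialization (both choices are arbitrary, for illustration):

```python
def bits_of(data: bytes) -> list:
    # Serialize bytes MSB-first into a flat list of bits.
    return [(b >> (7 - i)) & 1 for b in data for i in range(8)]

def bytes_of(bits: list) -> bytes:
    # Reassemble complete 8-bit groups back into bytes.
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits) - 7, 8))

block = bytes(range(16))           # a stand-in data block
stream = bits_of(block)
slipped = bytes_of(stream[1:])     # parse one bit late
# Every reassembled byte now mixes seven bits of one byte
# with one bit of the next: the whole block is garbage.
```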

Any checksum or error detection system will fail. Error correction systems will declare the block to be uncorrectable. What happens next depends on the application. In a fly-by-wire or by-light airplane, there is spatial redundancy and the same blocks are sent along multiple routes. One of the others will get through and the key action is to ignore the block that lost sync.

In a non-real time file transfer the network could re-transmit the block, which causes a small delay. A hard drive could retry by waiting one revolution, again causing a delay.

In a real time system that relies on forward error correction, or in systems designed for low latency, retries and retransmissions cannot occur and the receiver has to make the best of the lost block using, for example, concealment.

Fig.1 shows the beginning of a typical data block. There is a preamble and a sync pattern. The purpose of the preamble is to allow a phase-locked loop in the data separator to lock to the actual symbol rate of the data. The preamble is analogous to the burst in NTSC and works in the frequency domain. It takes an unknown time for a phase-locked loop to reach lock, and when it does so it could be almost anywhere in the preamble.

Fig.2 - The TRS-ID of SDI. The first three symbols are the sync pattern, which cannot be created elsewhere by any legal data patterns.

In contrast the synchronizing pattern works in the time domain and its detection enables the data separation and parsing processes. The detection of the sync pattern is a one-off event and the system must be designed to minimize false detections. Sync patterns just look like a few bits strung together, but there's a lot more to it than that.

Sync patterns are designed to have low autocorrelation functions. Autocorrelation is the correlation between a signal and the same signal shifted in time by various amounts. The ideal sync pattern is one in which there is poor correlation at all likely shifts as this minimizes the chances that false sync will be detected. White noise makes a good sync pattern as its autocorrelation function is a singularity. GPS uses a pseudo-random sequence, which for all practical purposes is noise.
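As an illustration, the 13-bit Barker code is a classic pattern with exactly this property: its autocorrelation peaks sharply at zero shift and stays near zero everywhere else.

```python
# The 13-bit Barker code, a well-known low-autocorrelation pattern.
barker13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]

def autocorr(seq, shift):
    # Correlate the sequence with a copy of itself shifted in time.
    return sum(a * b for a, b in zip(seq, seq[shift:]))

peak = autocorr(barker13, 0)                       # 13 at zero shift
sidelobes = [autocorr(barker13, s) for s in range(1, len(barker13))]
# Every sidelobe has magnitude <= 1, so a shifted copy
# of the pattern barely correlates with itself.
```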

In most data transmission, we are only interested in the state of individual bits, and the granularity of the message doesn't interest us. However, in GPS we are critically interested in the time at which messages are sent, and the accuracy required is much finer than the duration of a bit.

If the pattern of bits used for synchronizing happened to show up in the data block there could be false sync detection. Things are worse than that because false sync patterns can be created accidentally at the boundary between two data symbols, which by chance have the appropriate bit patterns.
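A toy example, using a hypothetical 8-bit sync pattern and two hypothetical data bytes, shows how a symbol junction can mimic sync:

```python
# Hypothetical 8-bit sync pattern, for illustration only.
SYNC = "11110000"

# Two individually innocent data bytes...
a, b = "00101111", "00001010"
stream = a + b

# ...whose junction happens to contain the sync pattern:
# the last four bits of `a` and the first four of `b`
# line up to produce a false sync detection.
false_hit = stream.find(SYNC)
```

Neither byte contains the pattern on its own; only the accident of their adjacency creates it.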

The serial digital interface (SDI) is an early approach to digitizing video and is basically a numerical representation of the whole component video waveform with one exception, which is that the analog sync pulses are not digitized. Instead synchronizing patterns, known as timing reference signals (TRS) are placed at the beginning and end of the digitized active line.

Fig.2 shows the structure of the TRS: a symbol of all ones followed by two of all zeros. These values represent the extremes of the binary coding scale and, to prevent false syncs, no video data are allowed to have those values. The legal luminance gamut is slightly smaller than the coding scale, so codes at the ends of the scale are illegal.

The color difference signals must be represented using offset binary, as a two's complement representation would require the all ones and all zeros codes to be legal. Using two symbols of all zeros in the TRS means that no junction of two or more legal data symbols can create a false TRS.
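That claim can be checked exhaustively. This sketch uses 8-bit values for brevity (SDI words are actually 10 bits, but the argument is the same): a false TRS would need 16 consecutive zero bits, and the longest zero run any junction of legal symbols can produce falls short of that.

```python
def max_zero_run(bits: str) -> int:
    # Longest run of consecutive '0' bits in the string.
    return max(len(run) for run in bits.split("1"))

# Legal 8-bit video values exclude all-zeros (0x00) and all-ones (0xFF).
legal = [format(v, "08b") for v in range(1, 255)]

# A false TRS needs 16 consecutive zero bits (two symbols of 0x00).
# Across the junction of any two legal symbols, the zero run tops out
# at 7 trailing + 7 leading = 14 bits, so it can never form.
worst = max(max_zero_run(a + b) for a in legal for b in legal)
```

Longer chains of symbols cannot do any better, because every legal symbol contains at least one one bit that breaks the run.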

Fig.3 - The sync pattern of AES/EBU has a run length of 1.5 bits, which cannot occur in audio data.

Ancillary data, such as audio, can be sent during blanking on SDI, and the ancillary data must also be encoded such that false syncs cannot be generated. This is done using odd parity, so codes of all ones and all zeros cannot occur.
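A sketch of the parity rule, following the article's description of odd parity on 8-bit codes:

```python
def has_odd_parity(value: int) -> bool:
    # Odd parity: the code contains an odd number of one bits.
    return bin(value).count("1") % 2 == 1

# All-zeros has zero ones and all-ones has eight ones: both even,
# so neither can occur in odd-parity ancillary data, and a false
# TRS can never be generated.
legal_anc = [v for v in range(256) if has_odd_parity(v)]
```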

The TRS code of SDI carries an additional symbol that makes up for the information lost by omitting the analog sync pulses. Three bits, F, V and H are included there. The F bit determines whether the associated field is odd or even. The V bit denotes vertical blanking and the H bit denotes the start or the end of the active line.
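Decoding those flags is straightforward. This sketch assumes the BT.656-style 8-bit layout, in which the fourth TRS symbol carries a fixed one followed by F, V, H and protection bits:

```python
def decode_trs_flags(xy: int) -> dict:
    # Fourth TRS symbol layout (BT.656-style, 8-bit): 1 F V H P3 P2 P1 P0.
    return {
        "F": (xy >> 6) & 1,  # field: 0 = first field, 1 = second
        "V": (xy >> 5) & 1,  # 1 during vertical blanking
        "H": (xy >> 4) & 1,  # 0 = start of active line (SAV), 1 = end (EAV)
    }

flags = decode_trs_flags(0b10110110)  # example word: F=0, V=1, H=1
```

The remaining protection bits form a small error-correcting code over F, V and H, omitted here for brevity.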

Another way of avoiding false syncs is to design the synchronizing system alongside the channel coding system such that the two co-operate. Fig. 3 shows the synchronizing system of the AES/EBU digital audio interface. This interface uses an FM channel code in which there is always a transition between the data bits, and a data one is denoted by an extra transition. FM is described as a run-length-limited code, because transitions generated by real data can never be further apart than one bit nor closer together than half a bit.

In the AES/EBU interface, the synchronizing pattern incorporates a run length of one and a half bits. As this violates the normal coding rules, a sync pattern can never be accidentally created by any real data pattern. This means that there is no need to restrict the encoded data in any way and indeed in AES/EBU all data values and combinations are legal, including all zeros, which represents silence or muting in two's complement coding.
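The run-length argument can be sketched by listing the transition times a biphase-mark (FM) encoder produces, counted in half-bit units:

```python
def fm_transition_times(bits):
    # Biphase-mark (FM): a transition at every cell boundary,
    # plus a mid-cell transition for each data one. Times in half-bits.
    times = [0]
    for i, bit in enumerate(bits):
        if bit:
            times.append(2 * i + 1)   # extra mid-cell transition for a one
        times.append(2 * i + 2)       # cell-boundary transition
    return times

times = fm_transition_times([1, 0, 0, 1, 1, 0])
gaps = [b - a for a, b in zip(times, times[1:])]
# Every gap is 1 or 2 half-bits (0.5 or 1 bit), so a gap of 3
# half-bits (1.5 bits) can only come from a deliberate sync violation.
```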

Fig. 4 - The sync pattern of EFM has two maximum run lengths of 11 channel bits each. Coding rules prevent audio data creating such a pattern.

In optical disks, the frequency response of the pickup is triangular and the signal to noise ratio is a function of frequency, being very good at low frequencies. The Compact Disc uses a channel code called EFM in which eight data bits are expressed as selected combinations of 14 channel bits. The selected combinations are subject to strict run length limits.

As with AES/EBU, the synchronizing pattern of CD violates the data encoding rules because it consists of three transitions spaced eleven channel bits apart; a pattern that cannot occur due to any data combination. The sync pattern shown in Fig.4 with its sparse transitions has very low autocorrelation away from the correct timing. Clearly an error during the sync pattern would prevent it being detected, but the sync pattern uses the lowest frequencies in the channel, which have the best SNR.
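The transition spacing of the CD sync pattern is easy to verify. In NRZI channel bits a one marks a transition, and the 24-bit pattern places its three transitions eleven channel bits apart:

```python
# The CD frame sync pattern as 24 NRZI channel bits
# (each '1' marks a transition in the recorded waveform).
SYNC = "100000000001000000000010"

ones = [i for i, c in enumerate(SYNC) if c == "1"]
spacings = [b - a for a, b in zip(ones, ones[1:])]
# Two successive spacings of 11: two maximum-length runs in a row,
# which the EFM coding rules never allow real data to produce.
```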
