Data Recording and Transmission: Part 7 - Delivering Data

In this chapter of the series “Data Recording and Transmission,” consultant John Watkinson investigates how electrical signals propagate and how data can be delivered.

The information society relies upon transmission of data from one place to another as much as it does on storage. Broadcasting in the traditional sense and in the modern IT-based sense relies heavily on it.

It has been stressed in earlier articles that the whole point of digital technology is that, by limiting the number of possible states in which a signal can exist, it becomes possible, in essence, to eliminate degradation of the signal. This is true of digital storage media and it remains true of digital transmission. In fact, from a theoretical standpoint, the only difference between storage and transmission is that storage is a process whereby digital transmissions are preserved on a medium. A medium is a way of introducing delay into a channel.

It follows that digital transmission consists of creating signals that are capable of discrete interpretation at the receiver. Before considering that, it is necessary to consider how electrically transmitted signals travel.

Beginning with very low frequencies, electricity must have a complete conductive circuit and a source of electromotive force (EMF) to cause current to flow around it. The first telegraphs used binary signaling, where the current in the circuit was turned on or turned off by a switch or key at the transmitter. The presence or absence of the current could be sensed by the magnetic field it produced at the destination.

A binary telegraphy system having agreement between the sender and receiver about the meaning of various symbols, such as in Morse code, can truly be described as digital.
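
As a minimal sketch of the idea, the fragment below keys a message into a sequence of discrete current-on and current-off states of agreed durations. The code table is deliberately incomplete and the timings are illustrative choices, not a full Morse implementation.

```python
# A minimal sketch of binary telegraphy: a message becomes a sequence of
# current-on/current-off states that the receiver interprets discretely.
# The (incomplete) code table and unit durations are illustrative only.
MORSE = {'S': '...', 'O': '---'}          # dot = short on, dash = long on

def key_current(message):
    """Return (state, duration) pairs: True means current is flowing."""
    out = []
    for ch in message:
        for mark in MORSE[ch]:
            out.append((True, 1 if mark == '.' else 3))  # element
            out.append((False, 1))                       # inter-element gap
        out.append((False, 2))                           # inter-letter gap
    return out

print(key_current('SOS'))
```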

In such low frequency systems, the electrical signal travels in the conductors. The current is dominated by the wiring resistance; the insulation merely prevents short circuits and its characteristics are otherwise of little importance. This remains true if the signal reaches audio frequencies over the typical length of loudspeaker cables. However, once telephony attempted to transmit audio over long distances, between cities for example, the physics changed.

The nature of the dielectric between the wires then became important. Fig.1a) shows a cable into which a positive voltage step has been launched. The voltage step can only advance as in b) if the capacitance of the cable near the leading edge is charged up. This requires a current to flow forward in the "hot" conductor and as current must flow in a complete circuit, there must be a return current in the "cold" conductor. This current loop creates a magnetic field that stores energy.

Fig.1. A driver launching a pulse into a transmission line. At a) the line must charge to allow the voltage to rise. There must be a complete circuit for the charging current. At b) the pulse has propagated and the complete circuit forms a current loop that stores energy. At c) the current loop is self-contained and propagates using stored energy.

If the driver subsequently terminates the pulse, the situation is shown in Fig.1c). The loop current cannot stop because the stored energy in the magnetic field gives it a form of inertia. At the trailing edge of the pulse the current discharges the dielectric capacitance in order to complete the loop. The current loop now rolls forward, without any assistance from the driver, charging at the leading edge and discharging at the trailing edge. The interplay between the inductance and the capacitance gives the cable a characteristic impedance. This affects what happens at the end of the cable. If the cable is left open circuit, the energy loop has nowhere to go. The end of the cable charges up, and launches a reflected pulse with a reversed current loop back towards the transmitter. If the end of the cable is short circuited, the current loop flows around the short and an inverted version heads back to the transmitter.
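
This charging and discharging behavior can be watched in a simple numerical model. The sketch below treats the cable as a ladder of LC sections, a discretized form of the telegrapher's equations; the component values and step counts are illustrative assumptions, not taken from the article. Left open-circuited, the far end charges up and launches a reflection back towards the driver, as described above.

```python
import numpy as np

# A minimal sketch of pulse propagation on a lossless transmission line,
# modelled as a ladder of LC sections (discretized telegrapher's equations).
# All values below (section L and C, step counts) are illustrative choices.
n = 400                        # number of line sections
L, C = 250e-9, 100e-12         # inductance and capacitance per section
dt = 0.5 * (L * C) ** 0.5      # time step, inside the stability limit
v = np.zeros(n)                # node voltages
i = np.zeros(n - 1)            # section currents

for step in range(1200):
    v[0] = 1.0 if step < 100 else 0.0       # drive a pulse, then remove it
    i += (dt / L) * (v[:-1] - v[1:])        # inductors: current follows dV
    v[1:-1] += (dt / C) * (i[:-1] - i[1:])  # capacitors: voltage follows dI
    v[-1] += (dt / C) * i[-1]               # open far end: the pulse reflects
```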

Such reflection is only prevented if the cable is terminated in its characteristic impedance. Those who remember analog video will recall the need to install a terminator at the end of a video cable to prevent ghosting caused by reflections. Digital video equipment had the termination built in, and it was only possible to connect one load to one driver.
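
In the usual notation (not spelled out in the article), a lossless line with inductance L and capacitance C per unit length has a characteristic impedance and, for a termination Z_L, a reflection coefficient:

Z_0 = \sqrt{L/C}, \qquad \Gamma = \frac{Z_L - Z_0}{Z_L + Z_0}

An open circuit (Z_L \to \infty) gives \Gamma = +1, a short circuit gives \Gamma = -1, and a matched termination (Z_L = Z_0) gives \Gamma = 0: no reflection.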

The existence of a current flowing in an inductive loop is interesting. Inductive loops exhibit interplay between the magnetic field they create and the loop EMF. If the current falls for any reason, the collapsing magnetic field creates an increased EMF that tries to maintain the current. If the current increases, the strengthening magnetic field absorbs energy.
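
In symbols, an inductance L carrying current I develops an EMF opposing any change in the current, and stores energy in its magnetic field:

e = -L\,\frac{dI}{dt}, \qquad E = \tfrac{1}{2} L I^2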

The phenomenon was first explained by Maxwell's equations. What they describe is the ability of a current loop to travel through space. This is electromagnetic radiation, the basis of everything from broadcasting through infra-red and visible light to X-rays. Essentially there are two interacting processes. Space has an impedance, and a current loop can travel in its capacitance. The magnetic field that results from the current loop contains energy. From Fleming's rule, the magnetic field must be in a plane at right angles to the electric field.
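
That impedance of free space is fixed by the permeability and permittivity of the vacuum:

Z_0 = \sqrt{\mu_0/\varepsilon_0} \approx 377\,\Omega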

Electromagnetic radiation can also be channeled in devices such as waveguides and optical fibers. Our problem is to find ways of modulating the energy in a way that allows data to be delivered. In a cable or an optical fiber, we can do as we please, because the radiation cannot escape. However, if we use radio signals, we have little control over where they go and it then becomes important to ensure that various different transmissions do not interfere with one another.

The solution adopted is to use tuned receivers that only respond to certain frequencies. Each transmission in a given locality is allocated a band of frequencies and it must not produce energy outside the allocated band.

A pure carrier wave has no bandwidth and, as all of the cycles look the same, it carries no information. Once we modulate it, sidebands are created that simultaneously give it bandwidth and information capacity. In order to see where sidebands come from, it is necessary to consider a sine wave in some detail. A rotation at constant angular velocity will look like a sine wave when viewed from the side. When viewed from ninety degrees away, it will look like a cosine wave.

In order to obtain a sine wave on its own, the cosine wave must be eliminated. This requires two constant rotations in opposite directions, so that the sine components add and the cosine components cancel. Thus a pure sine wave has to be considered as having equal parts of positive and negative frequency. The two will be indistinguishable unless we modulate the sine wave. The modulation process essentially causes the rotation of the modulating signal to be further rotated by the carrier.
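
Euler's formula states this exactly: a sine wave is the sum of two counter-rotating complex exponentials, one at positive frequency and one at negative frequency:

\sin \omega t = \frac{e^{j\omega t} - e^{-j\omega t}}{2j}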

Fig.2a) shows that if we consider the clockwise rotating part of the carrier, the clockwise rotating part of the baseband will result in an increased frequency, whereas in b) the anti-clockwise rotating part of the baseband will result in a reduced frequency.

Fig.2. At a) the positive frequency component of a sine wave adds to the frequency of a carrier, producing an upper sideband. At b) the negative frequency subtracts from the carrier frequency, producing a lower sideband.

That is where upper and lower sidebands come from. If two transmitters have their carriers separated by a certain amount, the bandwidth of the modulating signals must be restricted to prevent the upper sideband of one channel interfering with the lower sideband of the other. As a direct result there will always be pressure in radio transmission to reduce bandwidth, and this is reflected in the modulation schemes chosen and in the use of compression.
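
The trigonometric product formula makes the mechanism explicit: multiplying a carrier at \omega_c by a modulating signal at \omega_m produces sum and difference frequencies, the upper and lower sidebands:

\cos \omega_c t \cdot \cos \omega_m t = \tfrac{1}{2}\cos(\omega_c + \omega_m)t + \tfrac{1}{2}\cos(\omega_c - \omega_m)t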

One way of reducing bandwidth is to use more levels in the transmitted signal. For example, if eight levels are used, each symbol transmits three bits; for a given bit rate the bandwidth falls to one third. This approach was adopted in the 8-VSB system used for terrestrial digital transmission.
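
A minimal sketch of multi-level signaling follows; the eight level values and the bit mapping are illustrative choices, not those of the 8-VSB standard.

```python
# Packing three bits into one of eight amplitude levels, so each
# transmitted symbol carries three bits. Level values are illustrative.
LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]    # eight symmetrical levels

def bits_to_symbols(bits):
    """Group bits in threes; each group selects one of eight levels."""
    return [LEVELS[bits[i] * 4 + bits[i + 1] * 2 + bits[i + 2]]
            for i in range(0, len(bits) - 2, 3)]

print(bits_to_symbols([1, 0, 1, 0, 0, 1]))   # -> [3, -5]
```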

Those familiar with NTSC color television will recall that the chroma signal was modulated in amplitude and in phase. If color bars were being transmitted, the vectorscope displayed eight vectors. Six of these were at various places around the screen, corresponding to genuine colors, and two of them were superimposed in the center of the vectorscope, representing black and white; which of the two was meant could be resolved using the luma signal. As there are eight color bars, the transmission of a single chroma vector effectively encodes three bits.

This idea forms the basis for quadrature amplitude modulation (QUAM). Each symbol carries a certain number of bits. In the case of 5 bits, 32 different vectors will be needed, each one having a different combination of phase and amplitude. When viewed on a vectorscope, the equivalent of an eye pattern in data recorders is known as a constellation, as each vector produces a point of light on the screen. 

Fig.3. A QUAM coder converts incoming data symbols into channel bits, which drive the quadrature amplitude modulators. This allows every combination of data bits to create a unique vector.

The transmitted signal is the sum of the in-phase and quadrature components from the modulators. Fig.3 shows that an incoming data symbol is converted into channel bits, as in many recording codes. However, the channel bits from each symbol are divided into two sets. One set controls the amplitude of the in-phase signal and the other controls the amplitude of the quadrature signal.
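
Following the structure of Fig.3, a minimal QUAM modulator can be sketched as below. The article's five-bit example would need 32 vectors; for brevity this sketch uses four bits and a 16-point constellation, and the bit-to-amplitude mapping is an illustrative assumption rather than any broadcast standard.

```python
import numpy as np

# A minimal 16-point QUAM sketch following Fig.3: each 4-bit symbol is
# split into two 2-bit sets; one sets the amplitude of the in-phase
# carrier, the other that of the quadrature carrier. Mapping illustrative.
AMPL = [-3, -1, 1, 3]              # four amplitudes per axis -> 16 vectors

def quam_modulate(symbol_bits, fc, t):
    """symbol_bits: four bits. Returns the modulated waveform over t."""
    i_amp = AMPL[symbol_bits[0] * 2 + symbol_bits[1]]   # in-phase set
    q_amp = AMPL[symbol_bits[2] * 2 + symbol_bits[3]]   # quadrature set
    phase = 2 * np.pi * fc * t
    return i_amp * np.cos(phase) + q_amp * np.sin(phase)

t = np.linspace(0.0, 1e-6, 100)                 # one symbol period
wave = quam_modulate([1, 0, 1, 1], fc=5e6, t=t) # one constellation point
```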

Clearly as more bits are encoded per symbol, the number of vectors in the constellation increases and ultimately it becomes impossible to distinguish them because of noise. In satellite broadcasts the transmitter power is limited, but directional antennae such as dishes can be used, so the trend is to employ more bandwidth and fewer bits per symbol. In terrestrial transmissions, transmitter power is not an issue, but bandwidth is, so the trend is towards less bandwidth and more bits per symbol. 

Editor Note:

A complete list of John Watkinson's tutorials is available on The Broadcast Bridge website home page. Search for “John Watkinson”.

John Watkinson, Consultant, publisher, London (UK)
