A Practical Guide To RF In Broadcast: Broadcast Modulation

Defining the different types of RF modulation, power levels, regulatory standards and licensing for CW, AM, FM, and TV RF transmission.

Modulation of an RF carrier wave is what differentiates it from a continuous wave switched on and off to transmit Morse code wirelessly. Several types of modulation make broadcasting to the public possible.

Morse code was the first way to add intelligence to a continuous wave (CW) radio signal, making dots and dashes the original manual modulation technique for wireless communications. Radio waves are electromagnetic alternating current (AC) waves between approximately 30 Hz and 300 GHz.

RF transmitters are relatively simple. At the most basic level, an RF transmitter consists of a power supply, an oscillator to create the original carrier wave at a specific frequency, and tuned power amplifiers connected by a feedline to an antenna. Modulating a broadcast transmitter with an exciter is where RF gets a bit more tricky.

An exciter (also known as a modulator) creates the original modulated carrier wave at a specific frequency that a transmitter amplifies and sends to a broadcast antenna. The output of a TV exciter is typically 100 mW (0.1 watts). AM and FM transmitters also use exciters to ‘drive’ the transmitter’s power amplifiers (PAs). Amplitude Modulation (AM) varies the amplitude of the carrier signal with the modulation signal. Frequency Modulation (FM) varies the frequency of the carrier signal with the modulation signal.
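The two definitions above can be written as simple equations: AM scales the carrier's amplitude by the message, while FM steers the carrier's instantaneous frequency with it. A minimal Python sketch, using arbitrary toy values for carrier frequency, modulation depth, and deviation (not broadcast-grade DSP):

```python
import math

def am_sample(x, t, fc=10.0, depth=0.5):
    # AM: the carrier amplitude follows the message -> [1 + depth*x(t)] * cos(2*pi*fc*t)
    return (1.0 + depth * x) * math.cos(2 * math.pi * fc * t)

def fm_wave(xs, dt, fc=10.0, dev=5.0):
    # FM: the carrier frequency follows the message; the phase is the
    # running integral of the instantaneous frequency fc + dev*x(t)
    out, phase = [], 0.0
    for x in xs:
        phase += 2 * math.pi * (fc + dev * x) * dt
        out.append(math.cos(phase))
    return out

# A toy 1 Hz sinusoidal message sampled 1000 times
dt = 1e-3
ts = [n * dt for n in range(1000)]
msg = [math.sin(2 * math.pi * 1.0 * t) for t in ts]

am_out = [am_sample(x, t) for x, t in zip(msg, ts)]
fm_out = fm_wave(msg, dt)
```

Note the characteristic difference: the AM envelope varies with the message, while the FM waveform keeps a constant amplitude and only its zero crossings move.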

Analog TV transmission combined an AM signal carrying the video with an FM audio carrier located 4.5 MHz above it, within a 6 MHz channel. Vestigial sideband (VSB) transmission sends one full sideband of the AM signal plus only a vestige of the other, reducing the bandwidth the signal occupies. Analog NTSC transmission used analog VSB; 8VSB is the digital modulation method used for ATSC 1.0 transmission. 8VSB modulation converts a binary stream into an octal representation by amplitude-shift keying a carrier to one of eight levels. The net bit rate of a 6 MHz channel modulated with 8VSB is 19.39 Mbps.
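The 19.39 Mbps figure can be derived from the 8VSB framing constants published in the ATSC A/53 specification: a symbol rate of 4.5 MHz x 684/286, 832 symbols per data segment, one field-sync segment in every 313, and one 188-byte MPEG packet per data segment. As arithmetic:

```python
# Derive the ~19.39 Mbps net ATSC 1.0 payload rate from 8VSB framing
# constants (ATSC A/53).
symbol_rate = 4.5e6 * 684 / 286               # ~10.762 Msymbols/s
segments_per_sec = symbol_rate / 832          # each data segment is 832 symbols
data_segments = segments_per_sec * 312 / 313  # 1 of every 313 segments is field sync
bit_rate = data_segments * 188 * 8            # one 188-byte MPEG-TS packet per data segment

print(round(bit_rate / 1e6, 2))               # -> 19.39
```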

Coded orthogonal frequency-division multiplexing (COFDM) is the modulation method used to transmit ATSC 3.0 and DVB-T signals. COFDM uses forward error correction and time/frequency interleaving to overcome errors. The basis of COFDM is frequency-division multiplexing (FDM), where all subcarrier signals in a channel are mathematically orthogonal to one another.
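"Orthogonal" here has a precise meaning: over one symbol period, the inner product of any two distinct subcarriers is zero, so they can overlap in frequency without interfering. A small sketch demonstrating this with sampled complex exponentials (toy subcarrier indices, not a real channel plan):

```python
import cmath

N = 64  # samples per OFDM symbol period

def subcarrier(k):
    # complex exponential making k cycles per symbol period, sampled N times
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def inner(a, b):
    # discrete inner product over one symbol period
    return sum(x * y.conjugate() for x, y in zip(a, b))

# Distinct subcarriers: inner product is (numerically) zero
print(abs(inner(subcarrier(3), subcarrier(7))))
# A subcarrier with itself: full energy N
print(abs(inner(subcarrier(3), subcarrier(3))))
```

The first print is on the order of 1e-13 (zero up to floating-point error), the second is 64: overlapping subcarriers at integer cycle counts do not leak into one another.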

Cable systems and most MVPDs use quadrature amplitude modulation (QAM) for signal distribution to customers. Thus, a cable-ready TV set must be able to decode QAM and 8VSB. ATSC 3.0 (NextGen TV) sets decode COFDM signals.

DVB, DVB-T and DVB-T2

The key obstacle for DTV transmission is the bandwidth of legacy analog TV channels. Depending on the country, TV channels can have a legal bandwidth of 6, 7, or 8 MHz. In the USA, 6 MHz channels are the standard. Thus, the challenge of broadcasting DTV is to transmit as much data as possible in a 6 MHz wide signal. Fortunately, digital audio and video can be compressed.

Digital Video Broadcasting (DVB) uses coded orthogonal frequency-division multiplexing (COFDM) modulation that supports hierarchical transmission, also known as layered modulation. OFDM is a type of digital transmission and a method of encoding digital data on multiple carrier frequencies. In OFDM, multiple closely spaced, overlapping, orthogonal subcarrier signals are transmitted to carry data in parallel.
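The "data in parallel" step works by placing one data symbol on each subcarrier and summing all subcarriers into one time-domain waveform, which is an inverse discrete Fourier transform. A minimal sketch with QPSK data symbols and a naive (non-FFT) inverse DFT; the Gray mapping shown is one common convention, not necessarily the exact DVB-T mapping:

```python
import cmath

def qpsk(bits):
    # Gray-mapped QPSK: each 2 bits -> one unit-magnitude complex symbol
    m = {(0, 0): 1+1j, (0, 1): -1+1j, (1, 1): -1-1j, (1, 0): 1-1j}
    return [m[(bits[i], bits[i+1])] / abs(1+1j) for i in range(0, len(bits), 2)]

def ofdm_symbol(symbols):
    # Naive inverse DFT: each data symbol rides its own orthogonal
    # subcarrier, and all subcarriers are summed into one time-domain symbol
    N = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * n / N)
                for k, s in enumerate(symbols)) / N
            for n in range(N)]

bits = [0,0, 0,1, 1,1, 1,0, 0,0, 1,1, 0,1, 1,0]  # 16 bits -> 8 QPSK symbols
tx = ofdm_symbol(qpsk(bits))                     # 8 time-domain samples
```

A receiver runs the forward DFT on `tx` to recover each subcarrier's QPSK symbol independently, which is exactly why orthogonality matters.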

Terrestrial TV uses DVB-T transmission. DVB-T uses OFDM transmission and supports QPSK, 16QAM, and 64QAM modulation schemes. DVB-T multiplexes compressed video, audio, and data streams into an MPEG transport stream (MPEG-TS).

An MPEG-TS is a sequence of 188-byte packets and can carry several programs at once. A first level of error correction (Reed-Solomon) protects the transmitted data, allowing correction of up to 8 erroneous bytes in each 188-byte packet. A single DVB-T signal can be transmitted on a 6, 7, or 8 MHz TV channel.
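The 188-byte packet structure is simple to inspect: every packet starts with sync byte 0x47, followed by flags and a 13-bit packet identifier (PID) naming the stream it belongs to. A minimal header-parsing sketch:

```python
# Minimal sketch of MPEG-TS packet header parsing. Every 188-byte packet
# starts with sync byte 0x47; the 13-bit PID identifies the stream it carries.
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_header(packet: bytes):
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid TS packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]  # 13-bit packet identifier
    payload_start = bool(packet[1] & 0x40)       # payload_unit_start_indicator
    return pid, payload_start

# A dummy packet carrying PID 0x0100 with the start flag set
pkt = bytes([SYNC_BYTE, 0x41, 0x00]) + bytes(185)
print(parse_header(pkt))   # -> (256, True)
```

Demultiplexers use exactly this PID field to route packets from the single multiplex to the right audio, video, or data decoder.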

Hierarchical transmission is a signal processing technique for multiplexing multiple data streams into a single symbol stream. It is used to mitigate the digital cliff effect. Hierarchical transmission can simultaneously transmit two different MPEG-TSs, typically used to carry the same content in SDTV and HDTV on the same carrier. This allows reception to degrade gracefully: when the higher quality signal becomes too weak to decode, receivers can fall back to the lower quality stream instead of losing the picture abruptly. The DVB standard has been adopted by approximately 60 countries in Europe, Africa, Asia, and Australia.
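The idea behind layered modulation can be illustrated with a toy hierarchical 16QAM mapper (an assumption-level sketch with made-up spacings, not the exact DVB-T constellation): two high-priority (HP) bits pick the constellation quadrant with a coarse spacing, and two low-priority (LP) bits pick the fine position within it. A weak receiver can still decide the quadrant, recovering the HP stream, after the LP detail is lost to noise:

```python
# Toy hierarchical 16QAM: HP bits choose the quadrant (coarse spacing),
# LP bits choose the point inside the quadrant (fine spacing).
HP_SPACING = 4.0
LP_SPACING = 1.0

def hier_map(hp_bits, lp_bits):
    # each bit maps to a sign on the I or Q axis (0 -> +, 1 -> -)
    i = (1 - 2 * hp_bits[0]) * HP_SPACING + (1 - 2 * lp_bits[0]) * LP_SPACING
    q = (1 - 2 * hp_bits[1]) * HP_SPACING + (1 - 2 * lp_bits[1]) * LP_SPACING
    return complex(i, q)

def hp_demod(point):
    # a degraded receiver only decides the quadrant -> recovers the HP bits
    return (int(point.real < 0), int(point.imag < 0))

sym = hier_map((1, 0), (0, 1))      # HP=(1,0), LP=(0,1)
print(sym)                          # -> (-3+3j)
print(hp_demod(sym + 0.8 - 0.8j))   # noisy, but the HP bits survive -> (1, 0)
```

In the SDTV/HDTV scenario above, the HP stream would carry the robust SDTV fallback and the LP stream the HDTV detail.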

DVB-T2, finalized in 2011, stands for "Digital Video Broadcasting – Second Generation Terrestrial" and is an extension of DVB-T. It transmits digital audio, video, and other data in "physical layer pipes" (PLPs), uses OFDM modulation, and provides a higher bit rate than DVB-T.

ATSC

The Advanced Television Systems Committee (ATSC) ATSC 1.0 standard uses eight-level vestigial sideband (8VSB) modulation for terrestrial broadcasting. The standard has been adopted by nine countries, including the United States, Canada, Mexico, and South Korea. Current ATSC 1.0 supports the H.264/MPEG-4 AVC video codec, capable of 10 bits per color channel, or 1024 levels per channel. Early ATSC 1.0 signals were MPEG-2, with 8 bits per color channel, or 256 levels per channel. ATSC 1.0 supports a single fixed bit rate of 19.39 Mbps.

ATSC 3.0 supports RF transmission at UHD resolution of 3840x2160 at 60 Hz, although broadcasting a UHD channel OTA uses most of the 6 MHz TV channel. Instead, ATSC 3.0 uses a hybrid of OTA RF delivery and the internet to deliver UHD TV pictures and other content to home viewers. It transmits an HDTV signal over the air and sends additional UHD detail data over the internet. The OTA signal and the detail data are combined at the receiver to recreate and display a UHD picture.

ATSC 3.0 is a complex technology. A complete explanation of all it can do and how it all works is beyond the scope of this story.

Broadcasting IP

ATSC 3.0 is essentially IP over the air. Its physical layer is based on orthogonal frequency-division multiplexing (OFDM) modulation with low-density parity-check (LDPC) forward error correction codes. In a 6 MHz TV channel the bit rate can range from 28 to 36 Mbps or higher, depending on the parameters used. Like DVB-T2, it supports physical layer pipes; a channel is limited to four simultaneous PLPs, each of which may have a different level of robustness. PLPs are logical channels carrying one or more services, with a modulation scheme and robustness level particular to that individual pipe. DVB-T and ATSC 1.0 do not have PLPs.

ATSC 3.0 uses 10 bits/pixel and H.265 HEVC for transmission. Layers in the ATSC 3.0 protocol stack include system discovery and signaling, the physical layer using OFDM, internet protocols, and HTML5 applications.

Each ATSC 3.0 physical layer frame begins with a bootstrap signal, which allows a receiver to discover and identify the signals being transmitted. The bootstrap signal can also carry information to wake a receiver so it can detect, receive, and display an emergency alert message even when the TV set is turned off. The frame also contains a 'preamble' carrying the control information needed to decode the frame, and the 'payload' carrying the frame data.

HEVC compression supports video channels up to UHD resolution at 120 frames per second, with wide color gamut and high dynamic range, while ATSC 3.0 adds Dolby AC-4 and MPEG-H 3D Audio, datacasting capabilities, and more robust mobile television support. ATSC 1.0 uses Dolby AC-3 for 5.1-channel surround sound. ATSC 3.0 uses Dolby AC-4 for up to 7.1.4-channel sound and supports object-based audio formats like Dolby Atmos. It also supports MPEG-H 3D Audio, which can provide up to 64 loudspeaker channels for immersive audio.

Because ATSC 3.0 is IP, it is well suited for private datacasting and services such as the ‘broadcast internet’ for inexpensive one-to-many data distribution. 

ATSC 3.0 doesn’t need an internet connection to be viewed OTA on a NextGen TV with an antenna, but it does require an internet connection to access many of its new features such as interactivity, VOD, hybrid UHD, targeted advertising, targeted public alerting and more.
