A Practical Guide To RF In Broadcast: Codecs & Encoding
Here we look at codecs and encoding for digital OTA standards such as ATSC 3.0 and DVB, and at the network requirements for encoding and delivering streaming internet video.
The idea of compressing analog video by transmitting only the portions of the scene that changed frame-to-frame was proposed by R.D. Kell in 1929. Bell Labs proposed Differential Pulse Code Modulation (DPCM) for digital video compression in 1952.
A codec is software (often running on dedicated hardware) that encodes or decodes a digital data stream. Understanding codecs is an essential part of the life of an RF engineer.
Most video coding aka ‘compression’ formats are written and approved by standardization organizations and committees such as JPEG, MPEG, SMPTE, ATSC and others.
Most video compression is based on Discrete Cosine Transform (DCT) coding and motion compensation.
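To make that concrete, here is a minimal Python sketch (illustrative only; the NumPy-based transform and the sample gradient block are our own assumptions, not code from any standard) showing the DCT-II that JPEG- and MPEG-family encoders apply to each pixel block before quantization:

```python
import numpy as np

def dct2_8x8(block: np.ndarray) -> np.ndarray:
    """2D DCT-II of an 8x8 pixel block, the core transform of JPEG/MPEG coding."""
    N = 8
    n = np.arange(N)
    k = n.reshape(-1, 1)
    # Orthonormal DCT-II basis matrix: C[k, n] = s(k) * cos(pi * (2n + 1) * k / 2N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

# A smooth horizontal gradient: nearly all of the energy lands in a few
# low-frequency coefficients, which quantization and entropy coding then
# exploit to shrink the bit stream.
block = np.tile(np.arange(8, dtype=float), (8, 1))
print(np.round(dct2_8x8(block), 1))
```

Motion compensation builds on this: each block is predicted from a previous frame, so only the DCT of the prediction error needs to be coded and transmitted.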
Video content, encoded using an approved format, is typically bundled with an Advanced Audio Coding (AAC) encoded audio stream and delivered inside a multimedia container format such as MP4, MOV (QuickTime) or AVI, which carries the H.264 or H.265 video along with the audio.
A Little Codec History
The first digital video coding standard was H.120, introduced by the ITU-T in 1984, but it was not efficient enough to be practical at 1984 processor and network speeds. H.261 debuted in 1988. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991 to compress VHS-quality video. QuickTime was also created in 1991, and in 1998 the ISO approved the QuickTime file format as the basis of the MPEG-4 standard.
MPEG-2/H.262 was introduced in 1994 and became the standard for DVDs and SD DTV. MPEG-2 is a DCT-based algorithm that can deliver up to 100:1 compression. In 1999, MPEG-4 Part 2, based on H.263, was introduced. In 2003, MPEG-4/H.264/AVC followed; H.264 became the standard for Blu-ray Disc players, YouTube, Netflix, Vimeo and the iTunes Store. In 2013, High Efficiency Video Coding (HEVC/H.265/MPEG-H Part 2) was introduced for UHD Blu-ray, UHD streaming, DVB, ATSC 3.0, macOS High Sierra and iOS 11.
Transport Streams
Today, nearly all video is a ‘stream’ in one form or another, some over coaxial cable, such as SDI, and some over gigabit Ethernet IP networks, such as NDI. Both are high-bandwidth digital video transmission standards for production.
An MPEG transport stream (MPEG-TS) is a digital container for audio, video, and Program and System Information Protocol (PSIP) data, carried over broadcast, cable or IP connections. A transport stream wraps numerous sub-streams, often packetized elementary streams (PESs), which carry the main content encoded with an MPEG codec or a non-MPEG video codec such as JPEG 2000.
Each MPEG-TS stream is divided into fixed-length 188-byte packets that are interleaved together. Every packet begins with a sync byte (0x47) and a 4-byte header, and the communication medium may add additional information such as forward error correction. The 188-byte packet size was originally chosen for compatibility with Asynchronous Transfer Mode (ATM) systems defined by the American National Standards Institute (ANSI) and the ITU-T.
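As a concrete illustration of that fixed packet structure, the following Python sketch (a hypothetical helper, not part of any standard library) checks the sync byte and unpacks the key header fields of a single 188-byte packet:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Unpack the 4-byte header of one 188-byte MPEG-TS packet."""
    if len(packet) != 188 or packet[0] != 0x47:  # 0x47 is the sync byte
        raise ValueError("not a valid MPEG-TS packet")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "transport_error": bool(b1 & 0x80),       # set by demodulators on bad FEC
        "payload_unit_start": bool(b1 & 0x40),    # a PES packet or table starts here
        "pid": ((b1 & 0x1F) << 8) | b2,           # 13-bit packet identifier
        "scrambling": (b3 >> 6) & 0x03,
        "has_adaptation_field": bool(b3 & 0x20),
        "has_payload": bool(b3 & 0x10),
        "continuity_counter": b3 & 0x0F,          # detects lost packets per PID
    }
```

The 13-bit PID is what lets a demultiplexer pull one sub-stream, such as a single program's video, out of the interleaved whole.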
Many MPEG-TS streams, such as multiple TV channels, can be multiplexed together. Other containers such as AVI, MOV, and MP4 usually wrap each frame into a single packet. MPEG-TS streams are typically constant bitrate (CBR). ATSC 1.0 strictly requires a CBR transport stream, using null packets (PID 0x1FFF, carrying stuffing bytes) to fill gaps in the bit stream.
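A minimal sketch of that stuffing mechanism, assuming a simplified multiplexer that pads a fixed-size packet interval, might look like this in Python:

```python
NULL_PID = 0x1FFF  # PID reserved by MPEG-2 for null (stuffing) packets

def make_null_packet() -> bytes:
    """Build one 188-byte null packet; receivers discard its payload."""
    header = bytes([0x47, 0x1F, 0xFF, 0x10])  # sync byte, PID 0x1FFF, payload-only
    return header + bytes([0xFF] * 184)       # stuffing bytes

def pad_to_cbr(content_packets: list, packets_per_interval: int) -> list:
    """Fill the gap between actual content and the fixed channel rate."""
    stuffing_needed = packets_per_interval - len(content_packets)
    return content_packets + [make_null_packet()] * stuffing_needed
```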
ATSC 3.0
ATSC 1.0 supports a single bit rate of 19.4 Mbps and uses 8VSB modulation to deliver TV and data separately to viewers or customers. ATSC 3.0 can use both CBR and Variable Bit Rate (VBR) encoding, and all ATSC 3.0 content is delivered using standard internet IP. It also delivers more bits per Hz by using Orthogonal Frequency-Division Multiplexing (OFDM) modulation. Within a 6 MHz TV channel, ATSC 3.0 can deliver from 1-57 Mbit/s at 10 bits/pixel, with up to 64 physical layer pipes (PLPs).
A PLP is a separate logical channel with its own robustness and bit rate. A single service can combine a maximum of four simultaneous PLPs, which provides its maximum bandwidth. PLPs allow separate TV channels on the same RF channel. The PLP concept originated with DVB-T2, which uses OFDM modulation with forward error correction and interleaving, known as concatenated channel coding.
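The robustness-versus-capacity trade-off a PLP represents can be illustrated with a back-of-envelope estimate. The bandwidth and overhead figures below are rough assumptions for illustration, not values taken from the ATSC A/322 physical layer standard:

```python
import math

def approx_plp_bitrate(mod_order: int, code_rate: float,
                       usable_bw_hz: float = 5.83e6,
                       overhead_factor: float = 0.85) -> float:
    """Rough PLP payload estimate: bandwidth * bits/symbol * code rate * overhead.

    mod_order       constellation size (4 = QPSK ... 4096 = 4096QAM)
    code_rate       LDPC code rate (ATSC 3.0 defines 2/15 through 13/15)
    overhead_factor assumed allowance for guard intervals and pilots
    """
    return usable_bw_hz * math.log2(mod_order) * code_rate * overhead_factor

# A rugged mobile pipe and a high-capacity fixed pipe in the same 6 MHz channel:
print(f"QPSK @ 4/15:    {approx_plp_bitrate(4, 4/15) / 1e6:5.1f} Mbit/s")
print(f"256QAM @ 12/15: {approx_plp_bitrate(256, 12/15) / 1e6:5.1f} Mbit/s")
```

The point is the spread: a rugged low-order pipe for mobile receivers delivers a few Mbit/s, while a high-order pipe aimed at rooftop antennas approaches the top of the 1-57 Mbit/s range.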
Also, ATSC 3.0 uses a bootstrap signal to allow a receiver to discover and identify all the ATSC 3.0 signals being transmitted. The bootstrap signal can also wake up an ATSC 3.0 receiver to bring critical emergency information into homes. Beyond the bootstrap, ATSC 3.0 is a highly detailed suite of standards, recommended practices, and protocols, all of which are available in detail at Technical Documents - ATSC : NextGen TV.
For TV station engineers and transmitter engineers performing their daily duties, ATSC 3.0 is simply a new way to modulate a broadcast TV RF signal, with some additional features and channels such as a broadcast group’s unique NextGen TV app. In some ways, transmitting an ATSC 3.0 signal is like transmitting an ATSC 1.0 signal; the difference between the two is how the ATSC 3.0 signal is built as it is multiplexed and distributed. As always in TV transmission, there’s not much to worry about so long as everything is working. When something goes wrong, an engineer must find the problem and fix it, hopefully before viewers notice, by identifying what data is being transmitted, how well it is getting through, and if it isn’t, why not. That’s not something that can be diagnosed simply by watching a TV video monitor.
Easier Than They Look
Resolving ATSC 3.0 issues requires specialized T&M gear capable of displaying and analyzing the data in every stream and pipe. You’re not going to see anything helpful on a waveform monitor or vectorscope.
Many ATSC 3.0 T&M gear manufacturers were also key contributors to the creation of ATSC 3.0 standards and practices. Because they wrote and understand ATSC 3.0, their MPEG analyzers, MPEG-TS monitors, ATSC 3.0 analyzers and engineering alarms make monitoring and managing it much easier for station troubleshooters. In some ways there’s much more to monitor, but manufacturers are working on easier ways to monitor it all in real time, 24/7.
ATSC 3.0 is loaded with data and is likely to carry significantly more data as it moves forward. However, ‘loaded’ is nothing close to the data rate required for a 4K/60 video signal. On the other hand, digital data may be the future of TV broadcasting revenue. The ‘Broadcast Internet’ may put broadcast ‘high towers and high power’ to a more profitable use. A future chapter in this The Broadcast Bridge series, on ‘The future of OTA TV’, will investigate group-owned, tall-tower, high-power broadcast internet plans. The only way to compete with ISPs is to provide the best service to many customers at a lower price.