ATSC 3.0 Offers New and Flexible OTA Delivery Options for Broadcasters
The ATSC 3.0 standard affords broadcasters new ways to address the issue of UHD/HD/SD delivery across diverse reception conditions.
The broadcast industry is on the cusp of a new era with the launch of ATSC 3.0. This flexible platform supports multiple types of encoding, so viewers can always receive the best available signal across a variety of reception conditions.
The ATSC 3.0 suite of standards defines a completely new system for delivering television with some very promising goals:
- A new viewing experience through higher-quality audio and video, personalization and interactivity,
- The possibility to address new consumer behaviors and preferences,
- A more efficient and flexible use of the spectrum,
- The ability to leverage the power of both broadcasting and broadband,
- New services and new business models.
To reach these objectives, the ATSC 3.0 system has been designed for both broadcast RF and broadband networks.
The ATSC 3.0 Physical Layer brings unprecedented flexibility, including a wide range of modulation parameters and the ability to manage robustness-versus-bit-rate trade-offs with different PLPs (Physical Layer Pipes) within a single channel. The technology also provides future-proof extensibility through several optional mechanisms.
Layered Division Multiplexing (LDM)
Among these mechanisms is Layered Division Multiplexing (LDM). LDM supports transmitting two signals over a single RF channel. One signal (the Core Layer) is robust and can address receivers under difficult reception conditions (e.g. indoor or mobile). In return, it offers limited capacity.
The other signal (the Enhanced Layer) offers high capacity. While less robust, it targets receivers that enjoy good reception conditions, such as those with roof-mounted antennas. This two-layer technique allows ATSC 3.0 to address all reception conditions (indoor, mobile and fixed) within a single RF channel.
Two-Layer Modulation
The process begins with LDM, a constellation superposition technology that combines two modulation schemes (the two layers) into one RF channel. Each layer can consist of one or several PLPs. The Core Layer uses a ModCod (Modulation/Coding) combination that is equally or more robust than that of the Enhanced Layer. Each PLP may use a different FEC encoding (including code length and code rate) and constellation mapping.
For a 2-layer, 2-PLP LDM transmission, the code length will typically be the same, while the code rate and constellations will be different. For example, the Core Layer might use Ninner=64800, code rate=4/15 and QPSK, while the Enhanced Layer might use Ninner=64800, code rate=10/15 and 64QAM. The Core PLPs and Enhanced PLPs are combined in an LDM Combiner block.
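As a rough illustration of what these ModCod choices mean, the sketch below (hypothetical Python, not part of any ATSC 3.0 toolchain) computes the information bits carried per constellation cell for each layer. Real channel capacity also depends on FFT size, guard interval, pilots and framing overhead, which are ignored here.

```python
# Illustrative sketch: approximate information bits per constellation
# symbol ("cell") for the example ModCods above.
from math import log2

def bits_per_cell(code_rate: float, constellation_order: int) -> float:
    """Information bits per cell = code rate x bits per constellation symbol."""
    return code_rate * log2(constellation_order)

core = bits_per_cell(4 / 15, 4)        # QPSK, rate 4/15   -> ~0.53 bits/cell
enhanced = bits_per_cell(10 / 15, 64)  # 64QAM, rate 10/15 -> 4.00 bits/cell

print(f"Core Layer:     {core:.2f} information bits per cell")
print(f"Enhanced Layer: {enhanced:.2f} information bits per cell")
# The Enhanced Layer carries roughly 7-8x more payload per cell, at the cost
# of requiring a much higher SNR to be demodulated.
```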
An injection level controller reduces the power of the Enhanced Layer relative to the Core Layer, thereby setting the desired transmission energy for each layer. The energy level is chosen together with the ModCod parameters to achieve the desired coverage area at the desired bit rate. The Enhanced Layer injection level is selectable from 0 dB to 25 dB below the Core Layer. LDM is a key benefit of ATSC 3.0 because it allows each broadcaster to select the best trade-off between signal robustness and capacity for a given market.
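The following is a minimal sketch of the superposition performed by the LDM Combiner, assuming unit-power symbol streams for both layers. The function and variable names are illustrative, and the processing is simplified compared with the full A/322 chain.

```python
# Minimal sketch of LDM constellation superposition. The Enhanced Layer is
# attenuated by the injection level before being added to the Core Layer,
# and the sum is re-normalized to unit average power (a simplification).
import numpy as np

def ldm_combine(core_symbols: np.ndarray,
                enhanced_symbols: np.ndarray,
                injection_level_db: float) -> np.ndarray:
    """Superpose the Enhanced Layer onto the Core Layer.

    injection_level_db: how far below the Core Layer the Enhanced Layer is
    injected (selectable from 0 to 25 dB in ATSC 3.0).
    """
    scale = 10 ** (-injection_level_db / 20)       # amplitude scaling factor
    combined = core_symbols + scale * enhanced_symbols
    return combined / np.sqrt(1 + scale ** 2)      # keep unit average power

# Example: QPSK Core Layer, with a Gaussian stand-in for a dense QAM
# Enhanced Layer, injected 5 dB below the Core Layer.
rng = np.random.default_rng(0)
core = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
enh = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
tx = ldm_combine(core, enh, injection_level_db=5.0)
```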
SHVC: A Second Coding Option
High Efficiency Video Coding (HEVC) was designed to address the increasing bitrate demands of new video formats with higher resolutions and increased frame rates. The original HEVC standard was defined by the Joint Collaborative Team on Video Coding (JCT-VC) in early 2013. HEVC roughly halves the required bitrate compared to its predecessor, H.264/AVC, at comparable quality. The HEVC standard also supports the rapid deployment of emerging UHD services at 50/60 fps.
SHVC (Scalable High Efficiency Video Coding) is the scalable extension of HEVC: it can be considered HEVC with the additional option of scalability. A key benefit of SHVC is that it uses one bitstream to serve multiple receivers: one size fits all.
Scalable video coding processes the signal in several layers. Each layer corresponds to a different quality/resolution level of the content. These levels may represent different spatial or temporal resolutions, quality (SNR) levels, bit depths, or even color gamuts.
The first layer, called Base Layer (BL), represents the lowest quality level to be encoded and is fully backward compatible with a standard HEVC decoder. The Enhancement Layers (ELs) represent higher quality levels and use coding information from lower layers (BL or lower ELs) to reconstruct the video. All video layers are then multiplexed into a single scalable video bitstream.
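As a toy illustration of this layering principle (not actual SHVC coding), the sketch below builds a downscaled Base Layer and an Enhancement Layer residual from which the full-resolution picture can be reconstructed. The function names and the 2x spatial scaling are assumptions made purely for illustration.

```python
# Toy illustration of spatial scalability: the Base Layer is a downscaled
# picture, and the Enhancement Layer carries only the residual needed to
# recover full resolution from the upscaled Base Layer.
import numpy as np

def encode_scalable(frame: np.ndarray):
    base = frame[::2, ::2]                      # "HD" Base Layer (2x downscale)
    predicted = np.kron(base, np.ones((2, 2)))  # crude upscaling back to "UHD"
    residual = frame - predicted                # Enhancement Layer payload
    return base, residual

def decode_base_only(base: np.ndarray) -> np.ndarray:
    return base                                 # HD output for poor reception

def decode_full(base: np.ndarray, residual: np.ndarray) -> np.ndarray:
    predicted = np.kron(base, np.ones((2, 2)))
    return predicted + residual                 # full-resolution output

frame = np.random.default_rng(1).random((8, 8)) # stand-in for a UHD frame
base, residual = encode_scalable(frame)
assert np.allclose(decode_full(base, residual), frame)
```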
Even though SHVC delivers different quality levels to multiple types of receivers, it requires less bandwidth than an equivalent multi-encoding (simulcast) system. For instance, an SHVC solution in an HD-to-UHD scalable scenario provides a rate-distortion gain of 20% to 35% compared to a simulcast coding configuration.
Why Combine SHVC and LDM?
Because LDM is a native hierarchical modulation technique, ATSC 3.0 is particularly well suited to the transmission of hierarchically encoded video with an SHVC codec. The base signal (e.g. an HD picture) is carried in the robust Core Layer, while the enhancement signal (e.g. the enhancement from HD to UHD) is carried in the high-capacity Enhanced Layer.
With this technique, ATSC 3.0 receivers can display an HD picture when reception conditions are poor (mobile, indoor) and full UHD when reception improves.
The use case is obvious: the broadcaster can deliver the same content to both mobile receivers and residential locations at the lowest bandwidth cost, offering the highest possible robustness to mobile viewers and the highest picture quality to fixed, residential viewers.
A Practical Solution
Beyond this hierarchical encoding case combining SHVC and LDM, LDM also offers a way to reach receivers operating under difficult reception conditions from a regular TV channel, without using additional frequencies. The solution described below offers the potential for additional monetization through new use cases.
Figure 1 below shows how the SHVC LDM system can be implemented, providing scalable delivery of HD and UHD signals to two types of audiences (mobile and residential).
Figure 1. The dual coding paths of SHVC and LDM can be key to successfully growing a broadcaster’s viewership by providing consistent and high-quality signals.
The UHD content is first processed by an SHVC encoder, which creates the HD signal in the Base Layer (fully backward compatible with HEVC). The encoder also creates the enhancement signal (from HD to UHD) for the Enhancement Layer.
The two layers are passed to the ATSC 3.0 modulator, which inserts the SHVC Base Layer (HD content) into the robust Core Layer of LDM and the SHVC Enhancement Layer (HD-to-UHD) into the Enhanced Layer of LDM.
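A hypothetical configuration sketch (not a real modulator API) of how the SHVC layers could be mapped onto the two LDM layers, reusing the ModCod examples given earlier, might look like this:

```python
# Hypothetical mapping of SHVC layers onto LDM layers. All field names and
# values are illustrative assumptions, not an actual modulator interface.
ldm_config = {
    "core_layer": {                 # robust layer: mobile / indoor reception
        "payload": "SHVC Base Layer (HD, HEVC-compatible)",
        "plp_id": 0,
        "constellation": "QPSK",
        "code_rate": "4/15",
    },
    "enhanced_layer": {             # high-capacity layer: fixed rooftop antennas
        "payload": "SHVC Enhancement Layer (HD-to-UHD residual)",
        "plp_id": 1,
        "constellation": "64QAM",
        "code_rate": "10/15",
        "injection_level_db": 5.0,  # illustrative value, selectable 0-25 dB
    },
}
```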
Receivers with low-loss reception can demodulate both LDM layers, and their SHVC decoders can reconstruct the full UHD video. Receivers with high-loss reception demodulate only the robust Core Layer of LDM, and their HEVC decoders output an HD video signal. It is also possible to have an auto-adaptive process in which receivers automatically switch between HD and UHD quality depending on reception conditions.
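A simplified sketch of this receiver-side behavior might look as follows. The SNR threshold, function names and decoder stubs are assumptions for illustration only, not part of the ATSC 3.0 standard or any real receiver API.

```python
# Simplified sketch of the receiver decision described above.
ENHANCED_LAYER_SNR_THRESHOLD_DB = 20.0     # hypothetical demodulation limit

def hevc_decode_base(core_bits):
    """Placeholder for a real HEVC decode of the SHVC Base Layer (HD)."""
    return "HD frame"

def shvc_decode_full(core_bits, enhanced_bits):
    """Placeholder for a real SHVC decode of Base + Enhancement Layers (UHD)."""
    return "UHD frame"

def decode_next_frame(snr_db, core_bits, enhanced_bits):
    # Good reception: both LDM layers demodulated, full SHVC decode -> UHD.
    if enhanced_bits is not None and snr_db >= ENHANCED_LAYER_SNR_THRESHOLD_DB:
        return shvc_decode_full(core_bits, enhanced_bits)
    # Poor reception: only the robust Core Layer is usable -> HD output.
    # Re-evaluating this per frame gives the auto-adaptive HD/UHD switching
    # mentioned above.
    return hevc_decode_base(core_bits)
```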
With the launch of ATSC 3.0, broadcasters now have the tools needed to serve a wide range of receivers and audiences. The benefits of SHVC and LDM can be key to enabling broadcasters to successfully compete in a world of increasing content-delivery options.
Mickaël Raulet, Advanced Research Manager at ATEME
Eric Pinson, Head of Terrestrial Transmission Business Unit at TeamCast