Managing the “FUD” of Repack and ATSC 3.0

The acronym “FUD” stands for “fear, uncertainty, and doubt”. FUD neatly describes the unsettled attitude of many Directors of Engineering (DOEs) towards the approaching double-whammy of the FCC’s Spectrum Repack and the advent of ATSC 3.0 Over-the-Air (OTA) broadcasting.

There’s a good reason why DOEs are feeling the FUD about both. Their plates are already full, and yet they are somehow expected to make time for the repack and implementing ATSC 3.0. While there is certainly a palpable excitement at the new opportunities ahead, a sense of foreboding always accompanies such significant changes.

Fortunately for these DOEs, there are stress-reducing ways to manage Repack and lay the groundwork for a clean ATSC 3.0 transition. Through intensive planning and adoption of more efficient and flexible technologies that come with IP and the cloud, the FUD, while still present until the job is complete, can gradually subside.

The combination of the spectrum auction and implementation of ATSC 3.0 had many engineers wondering about the next steps.

To successfully take on a multi-stage, multi-site transition such as Repack or ATSC 3.0, efficiency and contingency planning are key to avoiding costly delays that could also impact other stations. Repack rollout schedules often reveal dependencies that stations must tackle sequentially. Hence, any mitigation strategy, such as end-to-end signal path oversight, that avoids a potential domino effect is a great insurance policy to have. To effectively understand what challenges lie ahead, it’s necessary to understand what challenges your signal path and quality face today.

From a monitoring perspective, legacy boxes will only get you so far. The more channels and streams added – and, if distribution to cable and satellite systems is part of the architecture, the more headends served – the higher the cost and complexity. Troubleshooting grows more time-consuming and labor-intensive if you are having to lug around expensive analyzers and other test equipment in the field. And analysis of signal quality and performance trends? Good luck.

This is why moving to a distributed, cloud-based monitoring infrastructure makes sense both today and moving forward. Backed by sophisticated software with a centralized architecture, DOEs can quickly establish a well-informed, reasoned, strategic approach to preparing for the Spectrum Repack and gearing up for the move to ATSC 3.0.

Baseline View

The first order of business should be to benchmark what you have now. A central goal of every DOE is to avoid complaints from the station/network and viewers alike. A survey of the current broadcast operation before making changes will minimize the complaints that arise during the Repack and ATSC 3.0 transitions. It is good to know what is working, what is not, and where the weak spots are before making changes, rather than wondering afterward whether a problem existed previously or was introduced by the transition.

With the proper technology, a broadcast network can be monitored, documented and checked on a continuing basis. Shown here is a Qligent report on continuity count errors versus video bitrates on a broadcast link.

Repack-related changes will mostly correlate with the RF plant, but it is still vital to have end-to-end oversight to quickly capture and diagnose any system abnormalities. RF performance alone won’t tell the entire story. A full analysis requires dialing into the QoS-oriented physical and transport layers and examining the impacts they may have on the Audio, Video, and Data service layers.
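
A transport-layer check of this kind can be surprisingly simple. As a hedged sketch (not any vendor's implementation), the function below scans a raw MPEG-TS capture for continuity counter errors per PID, following the ISO/IEC 13818-1 packet layout: 188-byte packets, 0x47 sync byte, a 13-bit PID, and a 4-bit continuity counter in the low nibble of byte 3.

```python
def continuity_errors(ts_bytes):
    """Return {pid: error_count} for a raw MPEG-TS byte string.

    Simplified sketch: the one-allowed-duplicate-packet rule and
    adaptation-field-only packets are not fully handled.
    """
    last_cc = {}
    errors = {}
    for i in range(0, len(ts_bytes) - 187, 188):
        pkt = ts_bytes[i:i + 188]
        if pkt[0] != 0x47:                      # lost sync -- skip packet
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit PID
        if pid == 0x1FFF:                       # null packets: no CC semantics
            continue
        has_payload = pkt[3] & 0x10             # CC increments only with payload
        cc = pkt[3] & 0x0F
        if has_payload and pid in last_cc:
            expected = (last_cc[pid] + 1) & 0x0F
            if cc != expected:
                errors[pid] = errors.get(pid, 0) + 1
        if has_payload:
            last_cc[pid] = cc
    return errors
```

A monitoring probe running a loop like this on each received stream is what turns “the picture looked bad last night” into a timestamped, per-PID error record.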

Understanding how signals are moving through these layers now is particularly important for Repack, as most broadcasters will likely find themselves in a UHF channel-sharing scenario. Most broadcasters will also need to plan for some sort of temporary signal re-encoding, re-distribution and/or re-multiplexing, all of which will require testing and verification. If sharing with another station, keeping an eye on the bitrates and data services you are still responsible for becomes essential. You might even want proactive coverage-area verification rather than waiting for viewer complaints to come in.
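
In a channel-sharing mux, each station's share of the payload can be estimated directly from a capture. The sketch below is illustrative only (the PID numbers and the 19.39 Mb/s ATSC 1.0 mux rate are assumptions, not a specific plant): it counts packets per PID and apportions the known total mux bitrate accordingly.

```python
def pid_bitrate_share(ts_bytes, total_bps=19_390_000):
    """Estimate bits/sec consumed per PID, proportional to packet count.

    Assumes a constant-rate mux at total_bps (19.39 Mb/s is the
    nominal ATSC 1.0 payload rate, used here as an illustration).
    """
    counts = {}
    total = 0
    for i in range(0, len(ts_bytes) - 187, 188):
        pkt = ts_bytes[i:i + 188]
        if pkt[0] != 0x47:                      # skip packets without sync
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        counts[pid] = counts.get(pid, 0) + 1
        total += 1
    return {pid: total_bps * n / total for pid, n in counts.items()}
```

Trending these shares over time is how a station sharing a transmitter can verify it is still getting (and paying for) the capacity it expects.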

Qligent end-to-end monitoring block diagram.

Visibility of the Cloud

By moving to a cloud monitoring and analysis system like Qligent Vision, DOEs can locate and connect probes that monitor all elements of RF performance. These remotely deployed devices measure how well the monitored TV signals are being received across the distribution and delivery chain, and automatically return that data to a centralized server for cross-correlation and analysis. This means staying on top of a signal fade or dropout as it occurs via system alerts, or spotting concerning performance trends through detailed data analysis.
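
The probe-to-server handoff is conceptually just structured telemetry. The sketch below shows one plausible shape for such a report; the field names and endpoint are illustrative assumptions, not the Qligent Vision API.

```python
import json
import time

def build_report(site_id, snr_db, mer_db, ber):
    """Package one RF measurement sample as a JSON payload (hypothetical schema)."""
    return json.dumps({
        "site": site_id,
        "ts": int(time.time()),                 # sample timestamp (epoch seconds)
        "metrics": {"snr_db": snr_db, "mer_db": mer_db, "ber": ber},
    })

# In a real probe this payload would be pushed to the central server, e.g.:
#   requests.post("https://central.example/api/v1/reports",
#                 data=build_report("KXYZ-tower", 28.5, 32.1, 1e-6))
# (endpoint URL is a placeholder)
```

Because every probe reports in the same schema, the central server can correlate a dip in MER at one site with a bitrate anomaly upstream without anyone driving to the transmitter.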

From the comfort of an office, a master control suite or even a home, a cloud-based architecture offers the depth required to centrally monitor more than just equipment health: the actual stream itself, such as how many bits are used between two or more stations sharing the same transmitter, filtering system and/or antenna; whether the encoder and multiplexer are performing as expected for each station; and whether forward error correction is optimized. Raw-stream and decoded compliance recording are key features for quick and easy verification, troubleshooting and support. These are examples of how monitoring, analyzing and troubleshooting performance issues in Repack scenarios will prove far less time-consuming for DOEs – especially in comparison to taking manual measurements in the field and manually correlating logs from different pieces of equipment.
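
The cross-correlation a centralized server performs, such as relating continuity-count errors to video bitrate over time as in the report shown earlier, can be as simple as a Pearson correlation over two aligned time series. A minimal sketch:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. per-minute CC error counts vs. per-minute video bitrate samples:
# a strong correlation points at the mux or link, not the encoder.
```

A coefficient near +1 or -1 tells the engineer the two metrics move together, which is exactly the kind of trend analysis that is impractical with handheld analyzers and paper logs.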

Of course, the Spectrum Repack only requires a broadcaster to redo the RF stage of the TV production chain. The move from ATSC 1.0 to the non-backwards-compatible ATSC 3.0 will require many links to be replaced, if not the entire chain itself. This will require an upgrade of the entire headend, the scheduler and framer, and – perhaps most importantly – the IP link from the studio to the transmitter site, given the inherent IP networking characteristics of the ATSC 3.0 standard.

For a DOE, building an ATSC 3.0 production chain (while keeping the current ATSC 1.0 system in operation) is a gigantic challenge. In bringing ATSC 3.0 up to speed, the DOE must run the full gamut of quality-monitoring issues: making sure the primary ATSC 3.0 video encoding, multiplexing, STL, and transmission systems are all working well together, and in compliance with loudness and closed-captioning regulations.
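
The loudness-compliance piece of that gamut is at least easy to gate automatically. As a hedged sketch: real measurement requires an ITU-R BS.1770 meter, but once per-program loudness values exist, checking them against the ATSC A/85 target of -24 LKFS (with the ±2 dB tolerance commonly applied) is a one-liner. The tolerance default here is an assumption, not a regulatory mandate.

```python
A85_TARGET_LKFS = -24.0   # ATSC A/85 recommended target loudness

def loudness_compliant(measured_lkfs, tolerance_db=2.0):
    """Flag whether a measured program loudness falls within tolerance
    of the A/85 target. tolerance_db=2.0 is a common practice, not a rule."""
    return abs(measured_lkfs - A85_TARGET_LKFS) <= tolerance_db
```

Running such a gate continuously on both the legacy 1.0 and new 3.0 chains catches a misconfigured encoder long before a CALM Act complaint does.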

The bottom line: To ensure everything works as planned, DOEs will need to test, retest, and test again to make sure that their ATSC 3.0 systems are ready for primetime. Transitioning to a cloud-based monitoring and analysis platform makes the results of these numerous tests extremely convenient: centralized and easy to access from any networked location. This reduces the DOE’s stress by spotting small issues before they become larger ones, which keeps complaints way down compared to the old manual methods of doing things. Start now! Baseline your current distribution and delivery, and have the oversight in place for Repack and ATSC 3.0.

Of course, there will always be challenges, and there will always be complaints – especially with transitions this enormous. Reducing their number and frequency will ensure the DOE is plagued by far less FUD along the way.

Ted Korte, COO, Qligent
