Navigating the Many Layers of the ATSC 3.0 Ecosystem: Part 2

Part One of this two-part series explored the various layers and protocols of ATSC 3.0 that broadcasters must understand to take full advantage of the opportunities available through the technology. This second and final installment explores best practices for signal verification and compliance across the ATSC 3.0 ecosystem.

Work on ATSC 3.0 recommended practices for verification and compliance is in progress. At this stage, it remains focused on the RF signal layers. For example, A/325 summarizes processes for testing RF performance in a lab environment, while A/326 presents objectives and a general methodology for testing RF performance in the field.

There is no question that a set of best practices for the overall ATSC 3.0 system will provide great value for broadcasters. End-to-end verification implies that all layers and all components must be evaluated as a whole rather than individually.

Some monitoring points will be the same as before: knowing whether what leaves the plant, what is received at the transmitter site, and what goes out over the air is good or bad. Unfortunately, because of the many layers, features and dynamic configurability, these three points can no longer be looked at in isolation. If deploying advanced features such as Audience Measurement, Ad Insertion, Second Screen, SFN and/or Channel Bonding, the broadcaster may also need to add last-mile and cloud monitoring points to the analysis mix.

Complex Tools Analyze a Complex Format

Many of the Quality of Service (QoS) and Quality of Experience (QoE) parameters are the same and can be meaningful independently, provided the operator is aware of what is being monitored. A tool that can compare and contrast all signal layers at multiple points along the path, in real time, is far more insightful and relieves the operator of having to be an expert in every detail. This capability also makes cause and effect immediately visible when making configuration changes and baselining operations.

During the analog-to-digital conversion, broadcasters discovered a new set of measurement tools known as QoS, QoE or transport stream analyzers. The level of capability and complexity of ATSC 3.0 Next Gen TV will require even more sophisticated tools.

The following are potential issues with early ATSC 3.0 deployments, along with approaches to resolve them quickly and reduce the risk of recurrence. The system under discussion consists of two probes with multiple inputs, one at each physical location, and one aggregation server to collect, correlate, and analyze all the inputs. Choose a tool that allows additional transmitter sites to be added easily, both for centralized monitoring and for a larger data set for correlation and analysis.
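As a rough illustration of that topology, the sketch below (Python, with placeholder host names and input labels, not a vendor configuration format) describes the two probes and the aggregation server, and shows why adding another transmitter site should amount to little more than adding another probe entry.

```python
# Hypothetical sketch of the two-probe, one-aggregator layout described above.
# Host names and labels are illustrative only; point numbers follow Figure 1.

MONITORING_TOPOLOGY = {
    "aggregation_server": {"host": "monitor.example.net", "port": 8443},
    "probes": [
        {
            "name": "studio-probe",
            "inputs": [
                {"point": 1, "signal": "encoder/packager output into the broadcast gateway"},
                {"point": 3, "signal": "broadcast gateway (scheduler/framer) STLTP output"},
            ],
        },
        {
            "name": "transmitter-probe",
            "inputs": [
                {"point": 6, "signal": "STLTP at the exciter input"},
                {"point": None, "signal": "demodulated off-air RF output"},
            ],
        },
    ],
}

def add_transmitter_site(name: str, inputs: list) -> None:
    """Another site is just another probe entry, which is what keeps
    centralized monitoring simple to scale."""
    MONITORING_TOPOLOGY["probes"].append({"name": name, "inputs": inputs})
```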

Figure 1: A multipoint monitoring system.

Studio and Transmitter Site Outputs

The first logical point to monitor and analyze an ATSC 3.0 signal is the RF output, but verifying pictures and levels isn't enough. It is important to analyze all the layers to verify a fully decodable signal, and to look for timing and synchronization abnormalities even if the signal appears decodable. The most logical way to do this is to compare the RF output to the output of the broadcast gateway or scheduler/framer, shown in Figure 1 at Point 3.

Comparing the two signals will expose timing and latency problems, missing or misaligned objects, and incomplete streams, and will verify bitrates, control and signaling information. Multi-layer analysis will also expose the impact these issues have on the decoded media. Not all discrepancies will degrade the viewer's QoE, but it is good to know where and when that line is crossed. This level of monitoring is analogous to knowing that what leaves the plant is good and that what goes over the air is good.
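A minimal sketch of this kind of cross-point comparison is shown below; the metric names, tolerances and signaling-table labels are assumptions for illustration, not a standard measurement schema.

```python
# Minimal, illustrative comparison of per-layer measurements taken at the
# broadcast gateway output (Point 3) and at the demodulated RF output.
# The metric names and tolerances are assumptions, not a standard schema.

TOLERANCES = {"plp_bitrate_kbps": 50, "emission_latency_ms": 20}

def compare_points(gateway: dict, rf: dict) -> list[str]:
    """Return a list of human-readable discrepancies between the two points."""
    issues = []
    for key, limit in TOLERANCES.items():
        delta = abs(gateway.get(key, 0) - rf.get(key, 0))
        if delta > limit:
            issues.append(f"{key}: differs by {delta} (limit {limit})")
    # Signaling tables should match end to end.
    for table in ("SLT", "SLS"):
        if gateway.get(table) != rf.get(table):
            issues.append(f"{table} signaling differs between Point 3 and RF")
    return issues

print(compare_points(
    {"plp_bitrate_kbps": 4500, "emission_latency_ms": 310, "SLT": "v7", "SLS": "v3"},
    {"plp_bitrate_kbps": 4390, "emission_latency_ms": 355, "SLT": "v7", "SLS": "v2"},
))
```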

If the vendor equipment in the chain supports a remote monitoring interface, such as Simple Network Management Protocol (SNMP) messaging, it is recommended to also include Points 2, 7 and 8 in Figure 1 in the overall analysis set. This correlates the signal at the input and output of a particular piece of vendor equipment with the vendor-reported information, providing a second opinion or another data point for faster root-cause analysis or false-alarm detection when a problem occurs. In a perfect world, everything looks good here and all is set. If a broadcaster sees problems, abnormalities or inconsistencies, there will be a need to dig deeper.
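The sketch below shows one hedged way such vendor status could be collected for correlation, by shelling out to the standard net-snmp snmpget utility; the host, community string and OID are placeholders rather than real vendor values.

```python
# Hedged sketch: polling vendor equipment status over SNMP so it can be
# correlated with the probe measurements at Points 2, 7 and 8. It shells out
# to the standard net-snmp "snmpget" utility; the host, community string and
# OID below are placeholders, not real vendor values.
import subprocess

def snmp_get(host: str, oid: str, community: str = "public") -> str:
    """Fetch a single OID value from a device, returning the raw value text."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, timeout=5,
    )
    result.check_returncode()
    return result.stdout.strip()

# Example: a hypothetical "output status" OID on the broadcast gateway.
# status = snmp_get("gateway.example.net", "1.3.6.1.4.1.99999.1.2.3.0")
```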

With ATSC 3.0 capabilities, broadcasters will need to take both a micro- and macroscopic view of signal delivery. Monitoring signal health throughout the delivery chain will be key to ensuring a high QoS.

Sources and Inputs

The next logical point to monitor in Figure 1 is Point 1, the input interface to the broadcast gateway. This verifies the output of the encoder and/or segmenter/packager independently, and also exposes structure and configuration data for interoperability.
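As one hedged example of the kind of structural check that can be run at Point 1, the sketch below fetches the DASH MPD produced by the packager and confirms it advertises both video and audio; the URL is a placeholder, and a real probe would also verify the ROUTE/MMT signaling carried over the air.

```python
# Illustrative structural check on the packager output at Point 1: fetch the
# DASH MPD and confirm it parses and advertises at least one video and one
# audio AdaptationSet. The URL is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

MPD_NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

def check_mpd(url: str) -> list[str]:
    problems = []
    with urllib.request.urlopen(url, timeout=5) as resp:
        root = ET.fromstring(resp.read())
    sets = root.findall(".//mpd:AdaptationSet", MPD_NS)
    types = {a.get("contentType") or a.get("mimeType", "") for a in sets}
    if not any("video" in t for t in types):
        problems.append("no video AdaptationSet advertised")
    if not any("audio" in t for t in types):
        problems.append("no audio AdaptationSet advertised")
    return problems

# problems = check_mpd("http://packager.example.net/service1/manifest.mpd")
```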

Encoding issues are an easy and obvious case where a QoS/QoE system can identify the root cause of errors. Some tools, such as Qligent Vision's Match, can compare the video going into the encoder with the video coming out of the transmitter RF to isolate programmatic issues. Such a tool can also expose content insertions, aspect ratio conversions, and embedded metadata such as watermarks or triggers; it can measure the percentage of quality degradation and the latency through the system, and offer a host of other useful analysis to show what was changed relative to the source streams.
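The sketch below is a minimal, generic illustration of that source-versus-air idea, not the tool's own method: it computes PSNR between two frames that are assumed to be already decoded, scaled and time-aligned, as one well-known proxy for quality degradation.

```python
# Minimal sketch of a source-versus-air picture comparison. It assumes the
# two frames have already been decoded, scaled to the same size and
# time-aligned; PSNR is used simply as one proxy for quality degradation.
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two aligned frames, in dB."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example with synthetic frames: the "air" frame carries a little noise.
source = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
aired = np.clip(source.astype(int) + np.random.randint(-3, 4, source.shape), 0, 255)
print(f"PSNR source vs air: {psnr(source, aired):.1f} dB")
```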

Studio to Transmitter Link

Point 6 in Figure 1 is the output of the STL, measured at the input to the exciter. If available via SNMP, include Points 4 and 5 from Figure 1 to collect data from the STL vendor and correlate it with the signal and the other vendor equipment findings.

Another valuable and quick check is to compare Points 3 and 6 in Figure 1, as they should be identical STLTP streams. Trending this data over time will profile the link health, which is especially useful if the link runs over a shared or public network. The STL uses SMPTE 2022-1 FEC, so keep an eye on the number of FEC packets recovered and other parameters to see how clean the link between sites is. The signal could be getting through fine while the STL error correction is working overtime to deliver it.
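One hedged way to quantify that is sketched below: since STLTP is carried as RTP over UDP, counting gaps in the RTP sequence numbers in PCAPs taken at Points 3 and 6 gives a rough measure of the loss the SMPTE 2022-1 FEC has to repair. The UDP port and file names are placeholders.

```python
# Hedged sketch: a quick STLTP health check on PCAPs captured at Points 3
# and 6. Counting gaps in the 16-bit RTP sequence number per capture gives a
# rough picture of loss. Uses scapy for PCAP reading; port 30000 is a
# placeholder.
from scapy.all import rdpcap, UDP

def rtp_sequence_gaps(pcap_path: str, udp_port: int = 30000) -> int:
    """Count missing RTP sequence numbers in one capture (mod-65536)."""
    gaps, last = 0, None
    for pkt in rdpcap(pcap_path):
        if not pkt.haslayer(UDP) or pkt[UDP].dport != udp_port:
            continue
        payload = bytes(pkt[UDP].payload)
        if len(payload) < 4:
            continue
        seq = int.from_bytes(payload[2:4], "big")  # RTP sequence number field
        if last is not None:
            gaps += (seq - last - 1) % 65536
        last = seq
    return gaps

# print("Point 3 gaps:", rtp_sequence_gaps("point3.pcap"))
# print("Point 6 gaps:", rtp_sequence_gaps("point6.pcap"))
```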

This Qligent Vision display screen illustrates how multiple key parameters can be monitored simultaneously. Any signals operating outside of user-set limits appear in red.

System Analysis Features 

The general features of a good end-to-end monitoring and analysis system consist of intuitive visualization, comprehensive analysis, and actionable reporting.

Visualization is key to a quick understanding of a system's overall health and performance. Dashboards and drill-downs with sortable and filterable alarms, multi-layer correlation and other visual explanations of the system are required in real time, alongside historic data. These views should be customizable for each user so they can focus on their areas of responsibility.

Analysis tools are important to help a user investigate the system from multiple perspectives. The operator needs to be able to identify the root cause of errors such as loss of captioning or a signaling failure. A measurement system should be able to tell the operator whether an error is a one-off or part of a ripple effect. Finally, the analysis platform should be able to offer predictive data based on observations and trend identification.

When collecting data from multiple signal points and status from multiple vendors' equipment, it is also necessary to have a tool with good data aggregation capabilities. Recording of the raw transport and/or packet capture (PCAP) is vital for sharing information with colleagues and vendors and for conducting secondary or post analysis.

Reporting features should cover a wide range of capabilities, from notifications to actual recordings to full report generation. Notifications are important for management by exception and need to be customizable for the usual visual and audible alarms, along with emails and text messages. Reports should also be able to provide machine-to-machine data for external systems such as a Network Management System (NMS) or an Operations Support System (OSS). Automated raw and consolidated reports, exportable to spreadsheet or document formats and accompanied by the associated recorded stream segments, are useful for cross-department and upper-management reporting.
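As a simple illustration of the machine-to-machine side, the sketch below pushes an exception as JSON to a hypothetical NMS/OSS HTTP endpoint; the URL and payload fields are assumptions, and a real integration might just as well use SNMP traps or syslog.

```python
# Illustrative notification hook: pushing an exception report as JSON to an
# external NMS/OSS endpoint over HTTP. The URL and payload fields are
# assumptions, not a defined interface.
import json
import urllib.request

def push_alert(endpoint: str, point: int, parameter: str, value, limit) -> int:
    payload = {"point": point, "parameter": parameter, "value": value, "limit": limit}
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# push_alert("https://nms.example.net/alerts", point=6,
#            parameter="fec_recovered_pps", value=120, limit=50)
```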

Long-term trend analysis for all parameters is an extremely valuable tool. There is no worse problem to resolve than the one that shows up every couple of weeks when no changes were made to either configuration or operation. One way to avoid long hours of troubleshooting these problems is a monitoring system that can trend and correlate several layers of the signal across several data points. Abnormalities will naturally stand out, and having recordings associated with the time of the error will help the engineer quickly resolve the situation.
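The sketch below illustrates the trending idea in its simplest form: keep a rolling window of a parameter (here an assumed FEC-packets-recovered metric) and flag samples that drift far outside the recent mean; the window size and threshold are illustrative.

```python
# Simple sketch of long-term trending: keep a rolling history of a parameter
# (e.g. FEC packets recovered per minute - an assumed metric) and flag
# samples that sit well outside the recent mean. Thresholds are illustrative.
from collections import deque
from statistics import mean, pstdev

class TrendWatcher:
    def __init__(self, window: int = 1440, sigma: float = 4.0):
        self.history = deque(maxlen=window)  # e.g. one day of per-minute samples
        self.sigma = sigma

    def update(self, value: float) -> bool:
        """Record a sample; return True if it looks anomalous vs the window."""
        anomalous = False
        if len(self.history) >= 30:
            mu, sd = mean(self.history), pstdev(self.history)
            anomalous = sd > 0 and abs(value - mu) > self.sigma * sd
        self.history.append(value)
        return anomalous

watcher = TrendWatcher()
for sample in [2, 3, 2, 4, 3] * 10 + [250]:  # sudden spike at the end
    if watcher.update(sample):
        print("Abnormal sample:", sample)
```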

When tied to an ingest/playout automation system, a monitoring system can inform the operator of incorrect or missed feeds or advertisements.

Adding any of the new ATSC 3.0 features, especially the interactive ones, will create a whole new set of timing and control problems. Imagine the hybrid-mode use case, where the user's player switches from broadcast to broadband and vice versa, as with a mobile ATSC 3.0 receiver. Supporting this handover requires monitoring certain key signaling. Once these advanced features are deployed, adding monitoring at Points 10 and 11 in Figure 1 will close a bigger loop for a much safer, faster, and less painful transition.
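As a hedged sketch of one such signaling check, the code below simply verifies that the broadband entry point advertised in the service signaling is reachable within a latency budget before a handover would be attempted; the URL and budget are placeholders.

```python
# Hedged sketch for the hybrid broadcast/broadband case: before a receiver
# can hand over to broadband, the signaled broadband entry point has to be
# reachable. This probe checks that the advertised URL answers quickly.
# The URL and latency budget are placeholders.
import time
import urllib.request

def broadband_path_ok(url: str, max_latency_s: float = 2.0) -> bool:
    """Return True if the broadband alternative responds with 2xx in time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=max_latency_s) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        return False
    return ok and (time.monotonic() - start) <= max_latency_s

# broadband_path_ok("https://cdn.example.net/service1/manifest.mpd")
```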

Timely Investments

ATSC 3.0 is clearly structured to be a game changer for broadcasters fighting for viewer share in an increasingly fractured television marketplace. With the promise of a new dawn comes great responsibility to learn, operate and maintain an infrastructure that will be unfamiliar to broadcasters in many respects. With so many new moving parts across the standards, streams and parameters built into ATSC 3.0, getting a head start on your monitoring strategy will go a long way toward ensuring the overall health and performance of your over-the-air system without having to become an expert in every new standard.

The intrinsic IP networking capabilities of ATSC 3.0 make moving to an IP- and cloud-based system both a smart and timely investment. An end-to-end or holistic system approach makes future configuration changes less risky, as cause and effect throughout the system will automatically be captured. Such an approach closes the signal monitoring loop that runs from the content source to the RF output, and will eventually extend to the viewer's receiver.

Ensuring that signals are being aired as intended across the many potential signal paths (RF, Cable, OTA, IP, OTT) is of critical importance. The ability to visualize, analyze, and report on the quality of the required signals (be it for experience or service) allows broadcasters to maximize the business capabilities of ATSC 3.0. The multi-point signal monitoring, big data analytics, and depth of troubleshooting enabled in a system like Vision will pay dividends from the moment a station’s ATSC 3.0 content delivery system goes on the air.

Part 1 in this two-part tutorial can be found here.

Ted Korte is Qligent COO.
