Effective Monitoring Strategies Today And The Path Forward

The number of stream monitoring and data aggregation points continues to multiply, but there are increasingly efficient ways to proactively manage your QoS, QoE and compliance requirements.

The broadcast and production industries operate in a business environment consumed by data. We rely on data to analyze and understand the quality and performance of our content, and we rely on data to measure and monitor audience viewership and response. But how do we capture and leverage meaningful data in an increasingly complex web of physical layers, transport streams and delivery platforms, all the way to the mobile device or the set-top box/media player in the home?

Very carefully and strategically, of course. There are innumerable data points to capture and analyze along the journey, and each point can provide valuable information for understanding your signal performance and its ultimate effect on the viewer. This includes not only the network contribution and distribution of the video and audio content – which itself originates from CDNs, headends, studios and other origination points – but also equipment health, stream latency and SLA parameters, among other elements.

Building out this network provides value beyond the “here and now”. Reacting to problems in the present helps reduce long-term churn, but establishing a reliable, thorough, data-rich architecture also gives operators the power of predictive analytics: the ability to foresee and prevent a common problem before it reaches a critical performance threshold, whether by pre-emptively rolling trucks or proactively alerting viewers.

Accumulation of Monitoring Points

One lesson we have all learned in the multiplatform world is that a diverse range of contribution and distribution sources now figures into the mix. Traditional terrestrial, cable and satellite sources remain a substantial part of the ecosystem. Contributed sources from remote studios and broadcast locations have multiplied. OTT and mobile/web complicate matters further, as they quickly multiply monitoring points and require a stronger understanding of the CDN’s role in the matrix.

Underneath all of this is the need to understand the role that the physical and transport layers play at the service level, and how the performance of the technical infrastructure and its separate components relates to quality of service (QoS). With QoS monitoring, we are generally looking to gather metrics that help operators understand the bit error ratio as it relates to RF and IP signals.

These metrics vary according to the source; for example, each terrestrial standard, be it ATSC, DVB-T, DVB-T2 or, looking forward, ATSC 3.0, offers unique modulation rates and characteristics. With IP, issues related to latency and dropped packets will vary based on the reliability and capability of the switches that signals pass through. Understanding QoS through the physical and transport layers, and gathering metrics across average bitrates, delay factors, jitter, inter-arrival times and modulation rates, will ultimately help the broadcaster or service provider reconcile the ratio of good bits to total bits to determine QoS performance.
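
To make the “good bits to total bits” idea concrete, here is a minimal Python sketch, not Qligent code: the jitter calculation is a simplified RFC 3550-style smoothing over inter-arrival deviations, and the sample timestamps and error counts are invented for illustration.

```python
from statistics import mean

def inter_arrival_jitter(arrival_times):
    """Smoothed jitter estimate from deviations of packet inter-arrival times."""
    deltas = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    expected = mean(deltas)          # nominal packet spacing for a CBR stream
    jitter = 0.0
    for d in deltas:
        jitter += (abs(d - expected) - jitter) / 16.0   # RFC 3550-style smoothing
    return jitter

def good_bit_ratio(total_bits, errored_bits):
    """Ratio of correctly received bits to total bits (1.0 means a clean signal)."""
    return (total_bits - errored_bits) / total_bits if total_bits else 0.0

# Invented sample: arrival timestamps (seconds) for six IP packets, plus error counts.
arrivals = [0.0000, 0.0011, 0.0021, 0.0033, 0.0040, 0.0052]
print(f"jitter ~ {inter_arrival_jitter(arrivals) * 1000:.3f} ms")
print(f"good-bit ratio = {good_bit_ratio(10_000_000, 42):.7f}")
```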

The quality of experience, or QoE, is where the viewer impact takes shape. QoE-related performance issues need to be recognized quickly; what matters for the service provider is the ability to quickly pinpoint the exact point of an error and measure its impact on the viewer. There are also innumerable other problems that can create QoE issues even without errors in the program delivery, such as poor bandwidth or aspect ratio management.

There are three general layers to monitor for QoE performance, each with its own distinct set of troubleshooting parameters:

  • Video Layer: Problems in the video layer correlate to compression or data loss, manifesting as macroblocking, frozen screens, black video, lack of decodable video, or lack of any signal whatsoever (a simple detection sketch follows this list).
  • Audio Layer: Common problems in the audio layer include silence, lack of decodable audio, or lack of any signal whatsoever. Loudness is perhaps the biggest concern, particularly as it relates to compliance monitoring for various standards (ITU-R BS.1770-3, EBU R 128 and ATSC A/85).
  • Data Layer: Closed captions (608/708), DPI triggers (SCTE 35/104) and ratings data (such as Nielsen watermarks) are common issues associated with the data layer. Closed captions, along with audio level bars, also must be burned into compliance recordings for proof of compliance. There is also concern about the quality of decoded metadata and the overlay of PIDs, codecs and bitrates, for example.
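
As referenced in the video layer item above, here is a hypothetical sketch of two basic video-layer QoE checks, black-frame and frozen-frame detection, operating on decoded luma planes supplied as NumPy arrays. The thresholds and frame data are assumptions for illustration, not a vendor implementation.

```python
import numpy as np

BLACK_LUMA_THRESHOLD = 20     # assumed 8-bit luma level below which a frame counts as black
FROZEN_DIFF_THRESHOLD = 1.0   # assumed mean absolute difference for a "frozen" frame pair

def is_black(frame: np.ndarray) -> bool:
    """Flag a frame as black when its average luma falls below the threshold."""
    return float(frame.mean()) < BLACK_LUMA_THRESHOLD

def is_frozen(prev: np.ndarray, curr: np.ndarray) -> bool:
    """Flag a frame as frozen when it is nearly identical to the previous one."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).mean()
    return float(diff) < FROZEN_DIFF_THRESHOLD

# Synthetic 1080p luma planes standing in for decoded frames.
dark = np.full((1080, 1920), 5, dtype=np.uint8)
print(is_black(dark), is_frozen(dark, dark))   # True True
```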

Effectively monitoring QoS and QoE requires ever broader scope and scale as stream monitoring points quickly multiply across platforms. And these points expand well beyond the broadcaster or service provider’s home market; more often than not, these customers need a way to economically monitor all of their streams and channels across countries and continents, if not the entire world.

Solving the Problem

Qligent first brought its Vision platform to market five years ago to address these changing requirements in QoS, QoE and compliance monitoring. This required a reasonable business model that would help users migrate from an increasingly complex web of point-based monitoring to a flexible architecture that can proactively aggregate relevant information across multiple points.

That architecture has also increasingly required a way to do this in real time to address the growing challenges related to QoE. It means understanding how to aggregate information from various locations and thoroughly investigate the conditions that cause viewing issues in the home or on the consumer’s mobile device of choice. From there, operators needed a way to quickly pinpoint problem areas by correlating various data sources and understanding trends that could predict future occurrences of the same problems.
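
One simple way to picture the trend-prediction idea is a linear fit over a metric’s recent history, extrapolated to a critical threshold. The sketch below is illustrative only; the metric, sample values and alarm limit are invented, and a production system would use far richer models.

```python
import numpy as np

def hours_until_threshold(timestamps_h, values, limit):
    """Estimate hours until a linearly fitted trend reaches `limit` (None if improving)."""
    slope, intercept = np.polyfit(timestamps_h, values, 1)
    if slope <= 0:                       # flat or improving trend: no predicted breach
        return None
    eta = (limit - intercept) / slope    # time at which the fitted line hits the limit
    return max(eta - timestamps_h[-1], 0.0)

# Invented sample: packet-loss percentage creeping upward over the last six hours.
hours = [0, 1, 2, 3, 4, 5]
loss_pct = [0.10, 0.12, 0.15, 0.19, 0.22, 0.26]
print(hours_until_threshold(hours, loss_pct, limit=0.5))   # rough hours until alarm level
```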

Vision’s initial architecture offered a clear roadmap for this migration. The goal was to move away from a primary focus on compliance monitoring (captions, loudness) and limited QoS/QoE monitoring that leveraged mostly on-premise probes and missed important data points along the broader contribution and distribution network. Vision provided a path toward a hybrid solution that leveraged probe points on-premises and in the cloud, allowing operators to aggregate information from any location.

Off-the-shelf, low-cost servers were leveraged to aggregate data across on-premise and cloud probe points, introducing the freedom of multi-location correlation, analysis and reporting. The low-cost server and networked probe architecture also opened the door to quick, lower-cost scalability, allowing broadcasters to add new probes and data points for aggregation as their contribution and distribution networks grew.
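
To illustrate what multi-location correlation might look like at its simplest, here is a hedged Python sketch of a central aggregator flagging sites that breach loss or jitter limits. The site names, field names and limits are invented for illustration; a real deployment would collect these reports over the network rather than from in-memory objects.

```python
from dataclasses import dataclass

@dataclass
class ProbeReport:
    site: str               # e.g. "headend-east" or "cloud-cdn-eu" (invented names)
    channel: str
    packet_loss_pct: float
    jitter_ms: float

def worst_offenders(reports, loss_limit=0.5, jitter_limit=50.0):
    """Correlate reports from every site and return those breaching either limit."""
    return [r for r in reports
            if r.packet_loss_pct > loss_limit or r.jitter_ms > jitter_limit]

reports = [
    ProbeReport("headend-east", "News HD", 0.02, 4.1),
    ProbeReport("cloud-cdn-eu", "News HD", 1.30, 12.0),   # lossy CDN leg
    ProbeReport("mobile-edge",  "Sports",  0.10, 75.0),   # high jitter at the edge
]
for r in worst_offenders(reports):
    print(f"{r.site}/{r.channel}: loss={r.packet_loss_pct}% jitter={r.jitter_ms}ms")
```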

Today, a hybrid system leveraging cloud and on-premise technologies, workflows and processes will deliver the highest efficiency, most detailed results and strongest value proposition for analyzing and understanding the file-to-last-mile performance of the operator’s system, and for reacting in a way that maintains viewer satisfaction and reduces the customer churn often associated with poor QoE performance. This not only covers traditional broadcasters and service providers delivering to TVs and set-top boxes, but also helps to solve the greater challenges of monitoring QoE on mobile devices.

A data-oriented diagram of the Vision Analytics dashboard.

Greater Opportunities Ahead

As we look forward, Qligent sees an opportunity to transition monitoring and analysis fully to the cloud – along with leveraging richer data sets through machine learning and big data – both key elements of Qligent’s Vision Analytics service. Together, Qligent’s Vision deployment in the cloud, along with Vision Analytics, will bring the added value of churn prediction, the identification of silent sufferers, service level agreement fulfillment, and infrastructure maintenance through a richer set of KPIs and KQIs customizable to each operation.
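
The following toy model gives a feel for how QoE-derived features could feed a churn-risk score, including the “silent sufferer” who never calls support. It is a minimal sketch on synthetic data, not Vision Analytics or any Qligent model; the features, labels and threshold are assumptions for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Synthetic per-subscriber features: [rebuffer_events, degraded_minutes, support_calls]
X = [[0, 1, 0], [1, 3, 0], [2, 8, 0], [5, 20, 1], [8, 35, 2], [12, 60, 3]]
y = [0, 0, 0, 1, 1, 1]    # 1 = subscriber eventually churned (synthetic labels)

model = LogisticRegression().fit(X, y)

# Score a "silent sufferer": poor experience but no support calls yet.
risk = model.predict_proba([[6, 25, 0]])[0][1]
print(f"churn risk ~ {risk:.2f}")
```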

The monitoring architecture is increasingly akin to an onion, with multiple layers to peel away to get to the root of a problem. Moving to a hybrid model that sets the user on a path toward a cloud-based system with richer data analytics will help them begin the process of reducing infrastructure, scaling their monitoring points, and introducing cutting-edge, machine-learning applications, such as predictive analytics, to simplify their operations and reduce costs. The ability to turn sites on and off quickly also opens the door for more cost-efficient event-based monitoring as needed.

As a pioneer and early cloud technology adopter in this space, Qligent predicted the trend of transitioning from point-based to cloud monitoring. As more broadcasters successfully transition to cloud or hybrid production, many of these organizations are adopting hybrid monitoring systems as the next step toward a comprehensive cloud migration.

The possibilities are endless as the monitoring universe and technologies continue to evolve, and the above is only a primer around what a hybrid (and future cloud) system with strong data analytics can offer. 
