
A Holistic Approach To OTT-Based Monitoring

OTT-based delivery is quickly maturing into a standard for distributing content across all content types and platforms. In this article we look at what it means to monitor a supply chain that uses OTT delivery as the backbone for broadcast content.

While there are several ways to measure system performance and viewer usage across disparate distribution platforms, monitoring telemetry across an Internet-based infrastructure takes on a whole new meaning with the convergence of OTT and ad-supported FAST channels. These new avenues allow us to grow audience, but they also shift several key metrics from the source or middle of the network to the edges. For content owners, it’s more than simply meeting regulatory or system performance mandates. They want to know they are meeting contractual obligations and gaining the most value from their inventory while delivering a quality of service and experience that meets consumer expectations.

Monitoring is key to fulfilling consumer contracts. At a higher level, there are two ways to emphasize its importance: there is the Quality of Service (QoS), which correlates with the overall quality and performance of the content distribution chain and the parameters that must be achieved on behalf of content partners; and the Quality of Experience (QoE) that you must ensure for the consumer who pays for the content.

If these terms seem to overlap, it is because they do. QoS is an overall measure of the elements involved in the delivery process, spanning all aspects of the delivery chain. QoE is related to QoS but is a measure of the targeted result as it pertains to the content itself. For example, it is perfectly possible to have poor QoE (subjectively and objectively) despite perfect QoS. In contrast, if QoS is poor then QoE can be impacted across the board, regardless of how well the content was prepared, and the result may be a poor consumer experience.

Conversely, the impact of QoE on QoS is best understood when we see incompatibility with the standards of distribution. Such incompatibility has to do with how the content itself is prepared and packaged, and therefore impacts elements of the delivery chain. An example would be poor encoding practices: the content may measure well when analyzed in isolation, but it can severely compromise QoS (and therefore QoE) as part of an overall delivery package. Consider targeted advertising, which relies on specific formatting of the content so that the insertion of content from separate origins can happen seamlessly. The takeaway is that QoS and QoE are overlapping, complementary measurements that require separate consideration.
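To make the distinction concrete, the two dimensions can be tracked side by side rather than collapsed into a single score. The sketch below illustrates that separation; the field names and thresholds are illustrative assumptions, not a standard schema or any particular product’s data model.

```python
# A minimal sketch of keeping QoS and QoE as separate, complementary measurements.
# The field names and thresholds are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class QoSSample:              # delivery-chain performance
    availability_pct: float   # e.g. origin/CDN uptime over the measurement window
    error_rate_pct: float     # failed segment requests as a share of all requests
    throughput_mbps: float    # bandwidth delivered at the measurement point

@dataclass
class QoESample:              # what the viewer actually experienced
    startup_time_s: float
    rebuffer_ratio_pct: float
    avg_bitrate_kbps: float

def assess(qos: QoSSample, qoe: QoESample) -> dict:
    """Judge each dimension on its own terms; one cannot stand in for the other."""
    return {
        "qos_ok": qos.availability_pct >= 99.9 and qos.error_rate_pct < 0.5,
        "qoe_ok": qoe.startup_time_s < 4.0 and qoe.rebuffer_ratio_pct < 1.0,
    }

# A flawlessly delivered but badly encoded stream: qos_ok is True, qoe_ok can still be False.
print(assess(QoSSample(99.99, 0.1, 25.0), QoESample(2.1, 6.4, 800.0)))
```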

The process of ensuring successful content delivery has changed as distribution platforms proliferate, and it remains novel to many. We now need to monitor at the source to confirm that the content being propagated is what was contracted for delivery, and then reconfirm that the quality meets agreed-upon standards at the point of delivery. This is achieved by developing specifications that must be met, such as monitoring for the presence and quality of the video, or confirming that the right video or commercial is being served. Monitoring strategies also need to ensure that the business models of all partners are supported in the most efficient and multi-faceted way.
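A delivery-point check of this kind can be sketched as a simple comparison between what was contracted and what an edge probe or player actually reported. The example below is illustrative only; the record shapes and field names are assumptions, not a real specification.

```python
# A hedged sketch of a delivery-point check: confirm that the asset (or ad) the edge
# actually served matches what was contracted, and that basic quality floors were met.
# The record shapes and field names are assumptions made for illustration only.

expected = {"slot": "2024-06-01T20:00:00Z", "asset_id": "EP-1042", "min_height": 720}

# What a hypothetical edge probe or player beacon reported for the same slot.
observed = {"slot": "2024-06-01T20:00:00Z", "asset_id": "EP-1042",
            "height": 1080, "video_present": True}

def verify(expected: dict, observed: dict) -> list[str]:
    """Return a list of discrepancies; an empty list means the delivery checks out."""
    issues = []
    if not observed.get("video_present"):
        issues.append("no video detected at the delivery point")
    if observed.get("asset_id") != expected["asset_id"]:
        issues.append("wrong asset or commercial served")
    if observed.get("height", 0) < expected["min_height"]:
        issues.append("resolution below the agreed floor")
    return issues

print(verify(expected, observed) or "delivery confirmed against contract")
```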

These attributes are often monitored separately, by separate organizations, and it is hard to drive accountability when the necessary telemetry is held by disparate parties. The right approach is to drive toward a solution that looks across the systems and networks that comprise the delivery chain. It should also be done in conjunction with monitoring the signals themselves, as each element is a key contributor to the overall success of the delivery itself, and ultimately the consumer experience.

The Changing Landscape Of Content Distribution

Typically we would look at content distribution using UDP-based networks and broadcast-specific systems, and we would design, implement and operate unique, segregated networks with an extreme level of attention to detail to ensure that content is delivered accurately. Due to the nature of UDP and the number of networks and associated pieces of equipment involved, we are forced to measure and remeasure content at each stage of the delivery chain to ensure that it is being delivered properly and at full quality. This means touching, analyzing, or even fully deconstructing the content multiple times just to ensure it is delivered correctly. This comes at great expense.

Most of these new opportunities for content distribution center on switching from the UDP-based delivery systems we have used for years to those based on TCP. The nature and design of TCP provide a high degree of confidence that the data will arrive intact, while introducing new challenges and opportunities for efficiency in scale and reach.

One caveat to consider is that TCP was designed to deliver packets of information without regard to timing. Combined with HTTP, the fanout performance of TCP-based networks is immense, and TCP is the most cost-effective and scalable mechanism for delivering content, but it still has challenges.

One of its well-known weaknesses is that the longer and larger the pipe, the less likely it is that the available bandwidth will actually be used: a plain TCP connection can only keep roughly one receive window of data in flight per round trip, so throughput is capped by the window size divided by the round-trip time.
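A quick back-of-the-envelope calculation makes that ceiling concrete. The window size and round-trip time below are illustrative figures, not measurements from any particular network.

```python
# Throughput ceiling of a single plain TCP connection: roughly one receive window of
# data can be in flight per round trip. The figures below are illustrative assumptions.

window_bytes = 64 * 1024   # a classic 64 KB receive window (no window scaling)
rtt_seconds = 0.100        # 100 ms round trip, e.g. a long intercontinental path

ceiling_mbps = (window_bytes * 8) / rtt_seconds / 1_000_000
print(f"~{ceiling_mbps:.1f} Mbps ceiling")   # ~5.2 Mbps, however fat the link may be
```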

Google and others have promoted protocols specifically to address latency, improving on the rate-control mechanisms inherent in the standard. However, the biggest impacts come from the advent of content delivery networks. CDNs solve part of this issue through the deployment of staged delivery systems with caching spread throughout the networks. This caching imparts latency, of course, but it also has very strong benefits. Higher latency is traded for greater scalability, content flexibility, and more diverse resiliency and redundancy opportunities.

In effect, we trade the continuous and ongoing cost of monitoring UDP networks at low latency for the simplicity, scalability, and auditability of TCP, whose reach is undeniably greater. Latency is not the bogeyman of TCP networks. Instead, it is simply a byproduct, a single element of a network topology that we must master to reap the benefits of a lower-cost, more flexible, and more scalable distribution strategy.

The biggest challenge from a broadcast perspective isn’t latency, which exists in all systems. It comes from the requirements around timing and how we ensure that the public network can meet the demands of broadcast distribution while, in many cases, still operating as a public network. To maximize that opportunity (greater scale = more audience, at lower cost), we must start to treat our systems, signals and networks as one unified operational topology. This means we must monitor the entire content supply chain from source to destination across all of its systems, and do so differently than we have in the past.

It’s time to embrace this approach in its entirety. We need not fear the scale or functionality of public networks; we should embrace their potential instead. Only in embracing the capacity and capabilities of a TCP-based, OTT-empowered delivery network can we open the door to new approaches to high-scale delivery and end-to-end auditability and, most critically, create an accountable approach to audience measurement that will give broadcasters the tools to compete with pure-play OTT systems.

All OTT systems are based on IP, and specifically on TCP-based delivery.

Beyond “As Run” Certification

The move to TCP, and specifically OTT, as the backbone of the delivery process enables more than just more efficient monitoring. Our goal is audience reach, and the closer we move to the audience, the more important it becomes to understand all aspects of the delivery system. This shift in focus has huge benefits for the industry. Not only will it allow us to make better decisions, it will also allow us to flex with the whims of the audience. It will provide us with better tools, more control and greater flexibility, whether our content is being delivered B2B or B2C. This new focus pivots from our current practice of “monitoring by exception” to one that adds monitoring for success.

The pivot to ‘success’ as a model allows us to focus on flexibility, scale, and targeting of audience at every level. Understanding what systems are reliable, what networks work well, and what level of quality is ‘acceptable’ to viewers opens the door to understanding the formula for success.

There is much more, however, that can be achieved by making the switch. When the network is broken down into its constituent parts, there is some source of content, a mechanism for delivery (typically a physical network), and one or more targeted destinations. The more we move to the cloud, the more we embrace public networks, and the more we understand the telemetry of the system, the easier it is to make prudent business choices. Netflix and others have an inherent understanding of what works, so what doesn’t work becomes less of a focus. For broadcasters, this has large ramifications for how we think about organizing and delivering linear content. It all starts with our concept of what constitutes a ‘channel’.

This is especially true as media companies move closer to direct-to-consumer offerings, which makes it highly important to trace and audit the content at every point in the delivery chain. It is the data produced by the entirety of the delivery chain that sets up the understanding of audience. When something does go wrong, you immediately know where, when, and which audience was impacted. When something is going right, you know how to recreate the success necessary to reach the projected scale. Having the ability to audit your distribution means being able to prove delivery at any level of scale. You know exactly how successful you are because you know exactly who, when, and where content was delivered.
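In practice, that proof rests on per-delivery audit records. The sketch below shows the idea with a hypothetical record shape; the fields and values are assumptions made for illustration, not any platform’s actual log format.

```python
# A minimal sketch of per-delivery audit records: the raw material for proving who
# received what, when, and where. The schema and values are illustrative assumptions.
from collections import Counter

deliveries = [
    {"asset_id": "EP-1042", "viewer": "u-81723", "region": "us-east", "ts": "20:00:04Z", "ok": True},
    {"asset_id": "EP-1042", "viewer": "u-20994", "region": "eu-west", "ts": "20:00:06Z", "ok": True},
    {"asset_id": "EP-1042", "viewer": "u-55310", "region": "us-east", "ts": "20:00:07Z", "ok": False},
]

# Prove where the content landed, and isolate exactly which audience a failure touched.
print("delivered by region:", Counter(d["region"] for d in deliveries if d["ok"]))
print("impacted viewers:", [d["viewer"] for d in deliveries if not d["ok"]])
```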

Careful monitoring allows us to prove that the content went where it was supposed to go, in the proper format and frame rate, and with the right levels of QoS and QoE. Leveraging the Internet, we can think of this as the ability to audit and monitor every piece of the network’s performance. The more we think about direct-to-consumer services, the more important monitoring every part of the chain becomes.

Also, the more detail we can extract from that monitored signal, the more potential value we can create. Broadcasters mostly rely on third-party monitoring services for audience metrics. But an OTT service knows exactly how many people are watching and, more specifically, who is watching the content or ads. That gives them more granular information to work with, which helps them make better business decisions, such as spinning up or down a channel or service quickly and with little effort.

A New Way To Monitor

The reason for this new type of TCP-based monitoring is that the topology of content distribution has become more dynamic and more complex at the same time. This highly detailed level of monitoring provides perhaps the most valuable resource that is going to drive your business: the ability to make decisions based on actual knowledge of system, network, and signal performance together with discrete knowledge of user behaviors.

Another benefit of this more complex monitoring is efficiency. You really get to understand the true value of an asset when you can track it from source to consumption, and across all consumers simultaneously. The role of the monitoring system is to consume those data points and constantly update the operations center on the performance of the whole network, exposing weak or strong points along the way on a constant basis. The ability to clearly understand when something happens, where it happens, and who it affects is key to success, and the ability to operate with that clarity paves the way for further investment.

Brick Eksten, CEO of Qligent.
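To illustrate the aggregation step described above, the sketch below folds edge-level measurements into a rolling view of network health and surfaces the weakest point. The component names and the single error-rate metric are hypothetical simplifications.

```python
# A sketch of the aggregation step: fold individual measurements into a rolling view of
# the whole network and surface the weakest point. Component names and the single
# error-rate metric are hypothetical simplifications.
from collections import defaultdict
from statistics import mean

samples = [
    ("origin",      {"error_rate_pct": 0.02}),
    ("cdn-pop-nyc", {"error_rate_pct": 0.10}),
    ("cdn-pop-lax", {"error_rate_pct": 2.40}),   # an outlier worth flagging
    ("player-edge", {"error_rate_pct": 0.30}),
]

by_component: dict[str, list[float]] = defaultdict(list)
for component, metrics in samples:
    by_component[component].append(metrics["error_rate_pct"])

health = {component: mean(rates) for component, rates in by_component.items()}
weakest = max(health, key=health.get)
print("health:", health, "| weakest point:", weakest)
```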


To do this we must have access to the telemetry of the supply chain as a whole. We do that with probes, by leveraging APIs and the data that is inherently available to us, and by applying specific technologies, such as those that are QoE-focused, when the available data simply isn’t good enough. As broadcasters, we have a more demanding view of what constitutes success in our content supply chain, and that means overcoming the limitations of traditional network and system monitoring.
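One way to picture that mix of sources is as a per-element decision: lean on a platform’s own API where its data is good enough, and fall back to probes and QoE-focused analysis where it is not. The sketch below assumes hypothetical element names and a simple quality label.

```python
# A hedged sketch of choosing the telemetry source per element of the chain: lean on a
# platform's own API where its data is good enough, and fall back to probes plus
# QoE-focused analysis where it is not. Element names and labels are assumptions.

chain = {
    "playout":   {"api_quality": "good"},
    "cdn":       {"api_quality": "good"},
    "last-mile": {"api_quality": "poor"},
    "player":    {"api_quality": "none"},
}

def telemetry_source(info: dict) -> str:
    return "native API" if info["api_quality"] == "good" else "probe + QoE analysis"

plan = {element: telemetry_source(info) for element, info in chain.items()}
print(plan)
```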

Knowledge Is Power

The depth of monitoring is limitless, and there is always more to know. A holistic approach to monitoring results in better control of the business. Traditional media needs to start thinking like Internet entrepreneurs instead of broadcast custodians. An Internet-based system makes it possible to monitor everything because it is all built on systems designed to expose telemetry. Access to the data is inherent in the design, and where it isn’t, we apply broadcast-specific technologies to achieve our goals.

Monitoring is the most under-valued business-related technology in television. As Internet-based content delivery systems become more complex, broadcasters can’t afford to do the kind of monitoring they were doing in the past. It needs to be more like a secure banking transaction, where every point of the delivery platform is closely watched and tracked.