Monitoring An IP World - OTT - Part 2

In this second instalment of our extended article on monitoring in OTT and VOD, we take a look at the core infrastructure and discuss how to analyze systems to guarantee that video, audio and metadata are reliably delivered through network and CDN infrastructures to the home viewer.



This article was first published as part of Essential Guide: Monitoring An IP World - OTT - download the complete Essential Guide HERE.

Distributed Servers
One method that significantly improves efficiency is to move the content closer to the viewers, and the edge servers, in effect, provide this solution. Situated within ISPs (Internet Service Providers), the edge servers provide the multi-bit-rate streams, manifests and other housekeeping files needed for systems such as DASH and HLS to operate. A single encrypted video and audio stream is then distributed to each edge server, reducing the load on the origin servers and the internet backbone.
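As a simple illustration of what those manifest files contain, the Python sketch below parses the variant streams from an HLS master playlist; the playlist text is a hypothetical example, not taken from any real service.

```python
# Minimal sketch: extracting the variant (multi-bit-rate) streams an edge
# server advertises in an HLS master playlist. The playlist below is a
# hypothetical example.
MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_variants(playlist: str) -> list[dict]:
    """Return one dict per variant stream: bandwidth (bps) and URI."""
    variants = []
    lines = playlist.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = dict(
                item.split("=", 1)
                for item in line.split(":", 1)[1].split(",")
            )
            variants.append({
                "bandwidth": int(attrs["BANDWIDTH"]),
                "uri": lines[i + 1],          # the URI follows the tag line
            })
    return variants

for v in parse_variants(MASTER_PLAYLIST):
    print(f"{v['bandwidth']:>8} bps -> {v['uri']}")
```

The viewer's device uses exactly this kind of listing to decide which bit-rate stream to request next.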

We must remember that the video and audio being streamed are not a continuous data stream in the traditional broadcasting sense. Instead, the streams are divided into small packets of video and audio that are sent using the TCP protocol, which reliably delivers the data to the end receiver. Without TCP there would be no mechanism to detect and retransmit lost packets, and data would simply go missing.
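The sketch below illustrates this packetized delivery over TCP on the loopback interface: a stand-in segment is sent as small chunks, and TCP, not the application, guarantees the bytes arrive intact and in order. The segment contents and chunk size are illustrative assumptions.

```python
# Minimal sketch: a media segment delivered as small chunks over TCP,
# which provides ordered, retransmitted delivery; the application just
# reassembles the byte stream. Runs entirely on the loopback interface.
import socket
import threading

SEGMENT = bytes(range(256)) * 64          # stand-in for a 16 KB media segment

def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        for i in range(0, len(SEGMENT), 1400):     # ~one MTU-sized chunk
            conn.sendall(SEGMENT[i:i + 1400])      # TCP handles loss/reorder

listener = socket.create_server(("127.0.0.1", 0))  # OS picks a free port
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

received = bytearray()
with socket.create_connection(("127.0.0.1", port)) as sock:
    while chunk := sock.recv(4096):                # empty read = stream closed
        received.extend(chunk)

assert bytes(received) == SEGMENT                  # delivered intact, in order
print(f"segment of {len(received)} bytes delivered intact")
```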

One of the consequences of packetizing the data is that it must be buffered throughout its transmission. This adds some latency but, more importantly, is a source of potential buffer overflow and underflow, resulting in lost packets. Lost packets can usually be recovered through the operation of TCP, so an occasional event is not much of an issue, but regularly occurring buffer anomalies are a different matter.
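A minimal sketch of this buffering behavior, with illustrative arrival and playout rates, shows how bursty arrivals produce the occasional underflow or overflow events described above.

```python
# Minimal sketch: a fixed-size receive buffer fed at a bursty arrival rate
# and drained at a steady playout rate. All rates and sizes are
# illustrative assumptions.
def simulate_buffer(arrivals, playout_per_tick=5, capacity=50):
    """Count underflow/overflow events over a sequence of arrival sizes."""
    level, underflows, overflows = 0, 0, 0
    for arrived in arrivals:
        level += arrived
        if level > capacity:                  # overflow: data would be dropped
            overflows += 1
            level = capacity
        if level >= playout_per_tick:
            level -= playout_per_tick         # steady playout drain
        else:
            underflows += 1                   # underflow: playback would stall
            level = 0
    return underflows, overflows

bursty = [12, 0, 0, 11, 0, 0, 30, 0, 0, 0]    # packets arriving in bursts
print(simulate_buffer(bursty))                 # occasional events are tolerable
```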

Figure 2 – each broadcast service is transcoded to provide multiple bit-rate streams, resulting in six more video and audio streams for DASH- and HLS-type services. If these are streamed over the internet, additional data is unnecessarily streamed, resulting in inefficient use of the internet and potential congestion. To avoid this, the transcoding function is placed at the edge servers.


OTT distribution is further challenged when we consider VOD and +1hr services. To avoid network congestion and overloading of the origin servers, the assets associated with these services are also placed on the edge servers. The edge servers still request information from the origin servers, but because they cache the video and audio, their requests are significantly reduced.
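A minimal sketch of this caching behavior, assuming a hypothetical fetch_from_origin() function, shows how repeat requests stop reaching the origin.

```python
# Minimal sketch: an edge server answers repeat requests from its own
# cache and only falls back to the origin on a miss. fetch_from_origin()
# is a hypothetical stand-in for the real origin request.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, fetch_from_origin, max_items=1000):
        self._fetch = fetch_from_origin
        self._cache = OrderedDict()           # LRU: oldest entry evicted first
        self._max = max_items
        self.origin_requests = 0

    def get(self, segment_url: str) -> bytes:
        if segment_url in self._cache:
            self._cache.move_to_end(segment_url)   # mark as recently used
            return self._cache[segment_url]
        self.origin_requests += 1                  # cache miss: ask the origin
        data = self._fetch(segment_url)
        self._cache[segment_url] = data
        if len(self._cache) > self._max:
            self._cache.popitem(last=False)        # evict least recently used
        return data

edge = EdgeCache(lambda url: b"segment-bytes")     # hypothetical origin fetch
for _ in range(100):
    edge.get("/vod/show/seg-0001.ts")
print(edge.origin_requests)                        # 1: origin hit only once
```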

Again, it’s worth remembering that a CDN is not just a high-capacity network link; it also includes the storage servers, transcoders and packetizers. Even from this simple example it can be seen that, although the introduction of the CDN has greatly improved the efficiency of OTT distribution and the quality of experience for the viewer, the price we pay is increased system complexity.

Monitoring Necessities
Monitoring brings order to complex systems. Through monitoring we can better understand what is going on deep within a system. This is even more important in OTT, as CDNs, ISPs and networks are often provided by different vendors. CDNs share their network bandwidth and infrastructure with several clients. Although data-rate shaping potentially protects clients from the effects of bursty data from other contributors, there is still the possibility that one client may use more than their fair share of capacity, resulting in lost packets and break-up of their service.
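One simple way a monitoring system might flag this, assuming the contracted shares and measured throughputs are available to it, is a per-client capacity check such as the sketch below; all figures are illustrative.

```python
# Minimal sketch: compare each client's measured throughput with its
# contracted share of a shared CDN link and flag anyone exceeding it.
# Client names, shares and measurements are illustrative assumptions.
LINK_CAPACITY_MBPS = 10_000

contracted_share = {"client_a": 0.50, "client_b": 0.30, "client_c": 0.20}
measured_mbps    = {"client_a": 4_200, "client_b": 4_100, "client_c": 1_500}

for client, share in contracted_share.items():
    allowed = share * LINK_CAPACITY_MBPS
    used = measured_mbps[client]
    if used > allowed:
        print(f"ALERT {client}: {used} Mb/s exceeds contracted {allowed:.0f} Mb/s")
```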

As we move to OTT it soon becomes evident that monitoring has moved on significantly from just confirming that the video and audio meet the relevant specifications. We must now consider the transport layer too, including the IP protocols. We have done this in the past, as RF can be considered a transport layer; the difference now is the complexity involved at both the system level and the data-link level, stemming from a plethora of OTT protocol types and audio/video codecs.
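As one example of transport-layer checking, the sketch below verifies the per-PID continuity counters of MPEG-TS packets, a basic indicator of lost packets. It is deliberately simplified: it ignores adaptation-field-only and duplicate packets, which a production analyzer must handle.

```python
# Minimal sketch: walk MPEG-TS packets and verify the 4-bit per-PID
# continuity counter increments modulo 16. Simplified for illustration.
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def check_continuity(ts_bytes: bytes) -> list[str]:
    errors = []
    last_cc = {}                                   # PID -> last counter seen
    for off in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_bytes[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            errors.append(f"sync loss at offset {off}")
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]      # 13-bit packet identifier
        cc = pkt[3] & 0x0F                         # 4-bit continuity counter
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors.append(f"continuity error on PID {pid} at offset {off}")
        last_cc[pid] = cc
    return errors

def make_pkt(cc: int) -> bytes:                    # synthetic test packet
    return bytes([0x47, 0x00, 0x64, 0x10 | cc]) + bytes(184)

stream = make_pkt(0) + make_pkt(1) + make_pkt(3)   # counter 2 is missing
print(check_continuity(stream))                    # flags PID 100
```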

If a broadcaster starts receiving reports of poor quality of service in a particular region, they could justifiably assume that a problem has occurred in a specific feed from a CDN. Placing monitoring before and after the CDN would confirm where the problem is occurring. The edge servers might also be causing problems, but the broadcaster will quickly be able to see whether the feed through the CDN to the edge servers is correct or not.
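The decision logic is simple once both probe points report, as this minimal sketch with hypothetical probe readings shows.

```python
# Minimal sketch: localize a fault by comparing the probe upstream of
# the CDN with the one downstream. The probe readings are hypothetical.
def locate_fault(before_cdn_ok: bool, after_cdn_ok: bool) -> str:
    if not before_cdn_ok:
        return "fault upstream of the CDN (broadcaster feed)"
    if not after_cdn_ok:
        return "fault inside the CDN or at the edge servers"
    return "CDN path healthy; investigate further downstream"

print(locate_fault(before_cdn_ok=True, after_cdn_ok=False))
```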

More Than Video And Audio
Analyzing the validity and frequency of the manifest and housekeeping files is critical to making sure a viewer can watch their program. Without the manifest files, the viewer's device will not know where the variable bit-rate streams are located and consequently will not know which stream to select, leaving the viewer unable to watch their program.
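A minimal sketch of such a check, assuming a hypothetical fetch_manifest() probe function, confirms that a live HLS playlist is well formed and is being refreshed.

```python
# Minimal sketch: fetch a live playlist twice and report both validity
# (well-formed header) and freshness (it changes between fetches).
# fetch_manifest() is a hypothetical probe callable.
import time

def check_manifest(fetch_manifest, target_duration_s=6.0):
    """Fetch a live playlist twice and report validity and freshness."""
    first = fetch_manifest()
    if not first.startswith("#EXTM3U"):
        return "invalid: missing #EXTM3U header"
    time.sleep(target_duration_s * 1.5)       # allow one refresh cycle
    second = fetch_manifest()
    if second == first:
        return "stale: playlist not refreshed within expected window"
    return "ok: playlist valid and refreshing"

# Demo with a static fetcher: reported stale, since it never changes.
print(check_manifest(lambda: "#EXTM3U\n#EXT-X-TARGETDURATION:6\n", 0.1))
```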

Installing monitoring probes deep inside the CDNs provides reliable feedback on the inner workings of the CDN, helping the broadcaster quickly find any issues with their feeds. This offers distinct advantages for both the CDN provider and the broadcaster. It is entirely possible that something has gone wrong at the broadcaster's end and the CDN provider is being presented with data that cannot be displayed on the viewer's device; knowing this would be extremely useful.

Adding centralization to the monitoring further improves the efficiency of the system. Probes strategically placed deep inside the OTT network, as well as within the broadcast facility, can all be connected together. Not only does this provide a centralized monitoring facility, it also gives the management software the opportunity to compare the measurements from all the other probes in the system.
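A minimal sketch of this cross-comparison, with hypothetical probe names and figures, flags any probe whose measured bit rate deviates sharply from the fleet median.

```python
# Minimal sketch: collect probe reports centrally and compare each
# probe's measured bitrate against the fleet median to spot an outlier.
# Probe names, figures and the 25% threshold are illustrative.
from statistics import median

reports = {
    "playout":      9.8,    # Mb/s measured by each probe
    "origin":       9.7,
    "edge-london":  9.6,
    "edge-madrid":  4.1,    # something is wrong on this leg
}

fleet_median = median(reports.values())
for probe, mbps in reports.items():
    if abs(mbps - fleet_median) > 0.25 * fleet_median:
        print(f"ALERT {probe}: {mbps} Mb/s deviates from median {fleet_median} Mb/s")
```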

Collaborative Monitoring
Centralized aggregation, analysis, and visualization of monitoring data in a distributed system helps broadcasters understand problems that may be occurring, as well as issues that have yet to materialize but are in the process of emerging. For example, the data rate of a link between an origin server and an edge server may increase even though the amount of streamed content has not. This could indicate a high packet-error rate, with TCP resends inflating the link's data rate.
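A minimal sketch of this early-warning check, with illustrative thresholds, compares link throughput against the known content bit rate.

```python
# Minimal sketch: if link throughput rises while the content bitrate has
# not, the excess is a plausible sign of TCP retransmissions. The 10%
# threshold and all figures are illustrative assumptions.
def resend_overhead_alert(link_mbps: float, content_mbps: float,
                          max_overhead: float = 0.10) -> bool:
    """True if the link carries >10% more data than the content explains."""
    overhead = (link_mbps - content_mbps) / content_mbps
    return overhead > max_overhead

# Content rate steady at 9.5 Mb/s while the origin-to-edge link climbs:
for link in (9.8, 10.1, 11.6):
    print(link, resend_overhead_alert(link, content_mbps=9.5))
```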

OTT systems have delivered unparalleled levels of service for viewers. Achieving the high quality of service viewers not only expect, but demand, has made OTT systems incredibly complex. This is further exacerbated by the number of vendors and service providers involved in an OTT broadcast chain.

To help make sense of this complexity, broadcasters must not only understand the intricacies of OTT playout, such as CDNs, but must also invest heavily in connected monitoring systems to help them understand where issues affecting quality of service are materializing, or about to materialize.
