Early Issue Detection Through Intelligent Global Monitoring is Key to Greater Security and Service

It is well known that security breaches cause far less revenue loss and reputational damage when they are caught early; they are simply cheaper to fix at that stage.

Steve Christian, SVP of Marketing, Verimatrix

The same holds true for hardware faults, software bugs and other issues that compromise performance and quality of service (QoS). While this applies to just about every industry, pay TV and commercial broadcasting have proved particularly challenging when it comes to detecting threats or performance issues early enough to minimize the impact on the service, its users, or operator revenue.

One reason is that security has long been an arms race, with new piracy and theft-of-service threats emerging as fast as known ones are countered, so operators with hardware-centric security solutions have historically tended to operate in a reactive mode. Security vendors like Verimatrix, with software-centric approaches, often have better options to make changes proactively and protect against known threats while keeping loss of revenue to a minimum.

Inevitably, though, we and our customers have also had to invest in countering threats as quickly as possible after they have occurred. The same is true for issues that threaten service continuity or quality, especially for OTT video services delivered over an unmanaged network, where user experience has become a major concern and a target for competitive differentiation.

The goal is to get to the point where our efforts are focused almost entirely on proactive threat management so that reactivity and crisis management are reduced to an absolute minimum. Fortunately, the continued expansion in network capabilities and bandwidth, combined with the way revenue security has evolved, has given us a great chance to erect a new level of defense against pirates.

The underlying point here is that revenue security technology has become more deeply embedded than ever before in the delivery networks of pay TV operators worldwide—to the extent that it has become a highly valuable source of data for monitoring not just security but also a host of other performance and customer-related issues. For this reason we have identified an unparalleled opportunity to extend the service we offer beyond security to embrace service continuity and quality, as well as, potentially, other applications of Big Data analytics. This has more recently resulted in partnerships with a number of analytics companies to enhance our collective offerings and develop secure analytics solutions for our mutual customers.

In order to make this all possible we first had to develop a global platform for securely collecting and monitoring data from VCAS deployments. We unveiled our Verspective Intelligence Center, which enables such centralized monitoring, last year, and it is now offered as an upgrade option to our customer base of about 850 operators across a broad spectrum, from leading multinational Tier 1s down to smaller local service providers, together representing more than 100 million client devices.

The size and breadth of this base is naturally a major source of Verspective’s potential, since it means we can build a representative snapshot of the operational and threat status of a significant segment of global video service activity at any given time. It is particularly valuable for gaining early warning of attacks or other issues, so that there is more time to take preemptive action. But it also means that our customers must be convinced that their data, some of which is confidential and subject to privacy constraints, is safe in our hands.

Now, in order to aggregate data, we are asking operators to allow us to drill a hole through their own protective firewalls so that we can extract information for our global monitoring platform. Of course, our first task was to ensure that this extraction process did not introduce any new vulnerability to outside attack. Above all, we had to ensure that all this was not in vain and that this data collection would make a major contribution to collective security, from which each operator would benefit individually.

 Verimatrix’ Verspective monitors data from 850 operators worldwide and 100 million devices.

There was also the issue of trust, which fortunately was not a significant obstacle since operators choose their security providers on that basis in the first place. Indeed global monitoring is really just an extension of the existing proven trust model between operator and security provider.

A natural question we were asked during the development and subsequent promotion of our monitoring platform is exactly what data we would be collecting and what benefits derive from it. In fact, both the volume and variety of that data have increased steadily as we have become more deeply embedded in our customers’ networks, and the same would apply for our major competitors.

Client devices fall into four main categories: set-top boxes, Internet-connected TVs, PCs/Macs and mobile devices, which are predominantly either iOS or Android. These devices feed back data about authentications, client activity and potentially QoS, which is especially relevant for mobile devices and OTT services. This can help operators improve the quality of experience and at the same time derive valuable information about usage that can both feed marketing programs and improve recommendations. It can include details, such as channel changes under IPTV multicast, that are otherwise beyond the operator’s view.
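
To illustrate, the kind of per-device telemetry described above might be modeled along the lines of the following Python sketch. The field names, event types and values are illustrative assumptions for this article, not the actual VCAS or Verspective data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ClientReport:
    """One telemetry event reported by a client device (hypothetical schema)."""
    device_id: str       # pseudonymized client identifier
    device_class: str    # "stb", "smart_tv", "pc_mac" or "mobile"
    event: str           # e.g. "authentication", "channel_change", "qos_sample"
    timestamp: datetime  # when the event occurred on the device
    detail: dict         # event-specific payload

# Example: a QoS sample from a mobile OTT client
report = ClientReport(
    device_id="c9f1a2",                 # anonymized before leaving the operator's network
    device_class="mobile",
    event="qos_sample",
    timestamp=datetime.now(timezone.utc),
    detail={"bitrate_kbps": 3500, "buffer_ms": 1200, "dropped_frames": 4},
)
print(report.event, report.detail)
```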

Although we don’t monitor the transmission network directly, we can do so in combination with our partners as part of end-to-end management of QoS. For example, one of our partners, monitoring specialist IneoQuest, deploys appliances or probes in the network to measure QoS, but to do that effectively it needs to inspect individual content during transmission. The problem is that the premium content for which QoS matters most is invariably encrypted to protect against piracy and unauthorized access. Verimatrix comes in by decrypting the content we are protecting, so that IneoQuest can analyze the flows with its probes and provide valuable information about QoS.

Then, at the head-end, we obtain information about license usage as well as high-level faults and general payload consumption data. This enables the operator to be proactive in responding to operational issues, such as a particular system reaching its capacity, which we can detect by tracking utilization of resources and hardware such as CPU, RAM and hard drives. We can also head off various problems, such as the expiry of licenses for software subsystems. Such occurrences might seem trivial but can erode the quality of experience for users and even trigger churn. With a centralized monitoring platform, many of these events can be forestalled without human intervention, with ever higher levels of automation.
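
As a simplified illustration of how such head-end checks can be automated, the sketch below applies utilization thresholds and a license-expiry warning window to a status snapshot. The thresholds, field names and values are assumptions chosen for the example rather than actual Verspective parameters.

```python
from datetime import date

# Illustrative limits; a real system would tune these per deployment.
CPU_LIMIT = 0.85          # flag sustained CPU utilization above 85%
DISK_LIMIT = 0.90         # flag disk usage above 90%
LICENSE_WARN_DAYS = 30    # warn a month before a subsystem license expires

def headend_alerts(snapshot, today):
    """Return human-readable alerts for one head-end status snapshot."""
    alerts = []
    if snapshot["cpu_util"] > CPU_LIMIT:
        alerts.append(f"{snapshot['host']}: CPU at {snapshot['cpu_util']:.0%}, nearing capacity")
    if snapshot["disk_util"] > DISK_LIMIT:
        alerts.append(f"{snapshot['host']}: disk at {snapshot['disk_util']:.0%}")
    for name, expiry in snapshot["licenses"].items():
        if (expiry - today).days <= LICENSE_WARN_DAYS:
            alerts.append(f"{snapshot['host']}: license '{name}' expires on {expiry}")
    return alerts

print(headend_alerts(
    {"host": "headend-01", "cpu_util": 0.91, "disk_util": 0.72,
     "licenses": {"multicast-encryptor": date(2017, 8, 1)}},
    today=date(2017, 7, 15),
))
```

The value of centralizing this is that such checks run continuously across every monitored deployment and can raise a support case automatically the moment a threshold is crossed.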

By aggregating data from so many operators across our customer base we can start to build a service that provides early warning of threats and issues to a much greater extent than has been possible until now. By deploying automated, machine-based surveillance with the capacity to learn which data signatures are indicative of impending problems, it is then possible to identify sophisticated correlations that indicate an attack may be imminent, or that some threat to service continuity is developing.
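
As a very simple stand-in for the learned data signatures mentioned above, the sketch below flags hours in which an aggregated event count (say, authentication failures across many deployments) deviates sharply from its recent baseline. The metric, window size and threshold are assumptions for illustration; a production system would use far richer features and models.

```python
import statistics

def anomaly_hours(counts, window=24, threshold=3.0):
    """Flag entries whose count deviates sharply from the trailing baseline.

    counts: hourly totals of some security-relevant event, aggregated
    across deployments. Returns (index, count, z-score) tuples.
    """
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0   # guard against flat baselines
        z = (counts[i] - mean) / stdev
        if z > threshold:
            flagged.append((i, counts[i], round(z, 1)))
    return flagged

# 24 quiet hours followed by a suspicious spike
hourly_failures = [12, 9, 14, 11, 10, 13, 8, 12, 11, 9, 10, 14,
                   13, 12, 9, 11, 10, 12, 13, 9, 11, 10, 12, 11, 96]
print(anomaly_hours(hourly_failures))   # flags the final hour as anomalous
```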

Many of these problems cannot be so readily identified and successfully diagnosed through the traditional local monitoring ecosystems that most operators still have. Such local monitoring is also labor-intensive and time-consuming, since operators need someone observing the monitoring system continuously, and even then they still have to contact Verimatrix support to fix any problems.

Our approach provides the platform for more automated support, so that any subsequent operator intervention is far more likely to pinpoint what is wrong very quickly. Our Intelligence Center substantially decreases the local effort required through continuous system surveillance combined with automatic case generation.

This centralization of monitoring also means that comprehensive historical data spanning multiple regions, use cases and service categories can be obtained and aggregated across operator domains. Therefore, subject to rigorous enforcement of privacy controls, there is a growing opportunity for operators to deepen the analysis they perform in order to improve their overall offerings. By harnessing the very detailed data available from their own security components, they will be better placed to deliver compelling, individually tailored content experiences that drive customer engagement and reduce churn.

We see this as just the beginning in a new era where previously independent parts of the video ecosystem converge to improve quality of service in diverse ways, while reducing the burden of maintenance and support for operators through greater levels of automation. 
