The Sponsors Perspective: Social Revolution Highlights Need For Effective Media Monitoring

When broadcast TV was the only media consumption option available to consumers, video monitoring was regarded as a luxury. Today it is seen as an essential requirement across all forms of media content delivery.

This article was first published as part of Essential Guide: Monitoring An IP World - OTT


Uptake of over-the-top (OTT) streaming services has risen steadily, and with it comes an essential requirement to monitor, assess and troubleshoot content delivery all the way from the broadcast production center through the content distribution network (CDN).

This rise in demand has been steady; however, the coronavirus pandemic has shone a new and intense light on this area. We live in unprecedented, uncertain times. The availability of reliable live OTT streaming services has become a critical requirement of our socially distanced lives as we fight the pandemic. Yet broadcasters, telecoms carriers and network operators are increasingly challenged by the capacity of their content distribution networks and by the need to keep content available without interruption. In this environment, reliable video monitoring is essential to locate and remedy distribution issues.

This is more than just a short-term anomaly: this social crisis could lead to long-term changes in workflow and behavior, with many organizations developing new business strategies that see them leveraging cloud-based technologies. These changes will have a profound effect on people’s working practices and many organizations’ capital investment policies.

Demand for OTT streaming services has grown exponentially. Network operators have admitted to experiencing capacity issues, while Netflix has resorted to streaming only standard definition content to save bandwidth capacity. Against this background, we are advising customers to urgently review their video monitoring capabilities. What monitoring resources do they currently operate, where are the areas of relative weakness in their monitoring networks, and what combination of monitoring technologies best meets their current and future operational needs?

The rise (and rise) of OTT services highlights an increasing need for virtualized monitoring capabilities, since these can reach beyond the media production center and throughout the content distribution network. Some of Rohde & Schwarz's larger broadcaster customers require more than one CDN provider to support their OTT streaming operations. In this situation monitoring is even more important, since users can monitor and record the operating efficiency of each infrastructure, identifying exactly where a fault or bottleneck occurs within a content distribution network.
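
To illustrate the principle (this is a generic sketch, not a description of how PRISMON works), the following minimal Python example requests the same HLS manifest from two CDN edges and records availability and response time, which is enough to show which delivery path is at fault. The endpoint URLs are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal multi-CDN probe: fetch the same HLS manifest from each CDN edge
and compare availability and response time to localize faults."""

import time
import urllib.request

# Hypothetical CDN endpoints serving the same live HLS playlist.
CDN_ENDPOINTS = {
    "cdn-a": "https://cdn-a.example.com/live/channel1/master.m3u8",
    "cdn-b": "https://cdn-b.example.com/live/channel1/master.m3u8",
}

def probe(name: str, url: str, timeout: float = 5.0) -> dict:
    """Fetch one manifest and record HTTP status and fetch latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            return {
                "cdn": name,
                "status": resp.status,
                "latency_ms": (time.monotonic() - start) * 1000.0,
                "bytes": len(body),
            }
    except Exception as exc:  # timeouts, DNS failures, HTTP errors, ...
        return {"cdn": name, "error": str(exc),
                "latency_ms": (time.monotonic() - start) * 1000.0}

if __name__ == "__main__":
    while True:
        for name, url in CDN_ENDPOINTS.items():
            print(probe(name, url))
        time.sleep(10)  # poll interval; a real probe would export metrics instead
```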

Such is the efficiency of virtualized monitoring that, within live OTT streaming networks, it can pinpoint and troubleshoot an operational issue before it becomes a problem that consumers are aware of. Another advantage of a fully virtualized solution is that no CAPEX investment is required, and operators can flexibly adapt it to their requirements and situation.

PRISMON Media Monitoring – Evolution Not Revolution

Media monitoring has been a key focus for us for many years. Our PRISMON suite of systems represents more than 10 years' development by a talented team of software and hardware engineers. Naturally, the first generations of PRISMON were on-premises installations; more recently we have seen an increasing need for more dynamic, virtualized monitoring facilities.

We introduced a cloud-based virtualized media monitoring platform, PRISMON.cloud, which enables content owners and network providers to monitor their data streams far beyond the broadcast production center and into consumers' homes. This resource provides reassurance that consumers are enjoying premium QoS and that, when faults do occur, they are identified and located immediately.

Virtualization is a key focus of technological evolution, and it will empower many broadcast and media organizations to work differently, more efficiently and more profitably. But it is not the answer for every scenario or every scale of operation. Before embarking on a voyage towards virtualization, the user must ask themselves some fundamental questions.

Firstly, does the user prefer flexibility in their production and distribution workflows, which is one of the key advantages of a virtualized environment? Or do they prefer operational efficiency, especially in 24/7 media processing operations? Quite simply, in either case a software-based solution running on a bare-metal, on-premises installation is the preferred way to build a future-proof setup.

Also, if broadcasters are migrating from a broadcast-specific standards and interface landscape to a generic IT/IP environment, then the migration has to be complete in order to realize the potential benefits of this new environment. However, life is never black and white; there are many shades of grey, and this is where the strength of the relationship between the user and technology partner becomes so important during the transition phase.

For example, implementing the SMPTE ST 2110 standards on flexible COTS (commercial off-the-shelf) IT hardware requires the skillsets of two different types of people: hardware-near software engineers who understand the way the hardware works and can tune the software to extract maximum performance from it; and software developers who implement the extended monitoring and analysis feature set that standard IT hardware makes possible at reasonable cost.
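
To make that division of labor concrete, here is a minimal sketch (in Python, for illustration only) of the "hardware-near" side of the job: pinning the receive process to a dedicated CPU core and enlarging the kernel socket buffer before joining a multicast group. The multicast address, port and core number are hypothetical, and production ST 2110 receivers are written in lower-level languages, often with kernel-bypass techniques.

```python
#!/usr/bin/env python3
"""Sketch of 'hardware-near' tuning for an ST 2110-style multicast receiver:
pin the process to one CPU core and enlarge the kernel socket buffer before
joining the multicast group (Linux only)."""

import os
import socket
import struct

MCAST_GROUP = "239.1.1.10"       # hypothetical essence multicast address
MCAST_PORT = 20000               # hypothetical RTP port
RX_CORE = 2                      # CPU core reserved for packet reception
RCVBUF_BYTES = 32 * 1024 * 1024  # large buffer to absorb bursty video traffic

def main() -> None:
    # Pin this process to one core so the receive loop is not migrated
    # by the scheduler (Linux-specific call).
    os.sched_setaffinity(0, {RX_CORE})

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Request a much larger receive buffer than the default;
    # the effective value is capped by net.core.rmem_max.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCVBUF_BYTES)
    sock.bind(("", MCAST_PORT))

    # Join the multicast group on the default interface.
    mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    packets = 0
    while True:
        data, _addr = sock.recvfrom(2048)  # one RTP packet per datagram
        packets += 1
        if packets % 100000 == 0:
            print(f"received {packets} packets, last size {len(data)} bytes")

if __name__ == "__main__":
    main()
```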

Our company has built its reputation over many decades on its hardware-near software engineering capabilities. We have also developed skillsets that are valuable when we address the needs of virtualized workflows.

With all these skillsets, we can offer customers both advice and support: throughout the signal processing chain, and also on how the virtualized infrastructure is set up. We can optimize data processing in a virtualized framework to minimize latency while maximizing throughput and stability.

The question is: how does a broadcaster create bridges to that asynchronous IT/IP world in ways that enable viewers to consume AV content in a manner they are familiar and happy with? It requires specialist knowledge and skillsets, but the big challenge is to customize a virtualized environment to the specific needs of that user.

Third-Party Interoperability In Virtualized Environments

Virtualized environments will never be limited to just one vendor's products; they will be a basket of several different products from a range of companies. This requires a great deal of interactivity and interoperability within the broadcaster's workflows. In this environment things will inevitably go wrong from time to time, and it is essential that the broadcaster has a strategy in place for when this occurs so that the impact is minimized. IP- and software-based systems give us a great toolset to ensure the stability of the workflow, drawing on the enhanced redundancy solutions developed over the last 10 years.
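
One example of such a redundancy solution is sending the same stream over two network paths and merging the copies at the receiver, as standardized in SMPTE ST 2022-7. The sketch below is a deliberately simplified, hypothetical Python illustration of that idea rather than a production implementation; the port numbers are invented, and a real receiver bounds the duplicate-tracking state and handles packet reordering.

```python
#!/usr/bin/env python3
"""Toy illustration of duplicate-stream redundancy (in the spirit of
SMPTE ST 2022-7): receive the same RTP stream on two network paths and
keep whichever copy of each packet arrives first."""

import select
import socket

# Hypothetical ports on which the primary and backup copies arrive.
PRIMARY_PORT = 5004
BACKUP_PORT = 5006

def open_rx(port: int) -> socket.socket:
    """Open a non-blocking UDP socket bound to the given port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setblocking(False)
    return sock

def rtp_seq(packet: bytes) -> int:
    """The RTP sequence number lives in bytes 2-3 of the fixed header."""
    return int.from_bytes(packet[2:4], "big")

if __name__ == "__main__":
    socks = [open_rx(PRIMARY_PORT), open_rx(BACKUP_PORT)]
    seen = set()  # sequence numbers already delivered (toy: unbounded)

    while True:
        readable, _, _ = select.select(socks, [], [], 1.0)
        for sock in readable:
            packet, _addr = sock.recvfrom(2048)
            seq = rtp_seq(packet)
            if seq not in seen:
                # First copy to arrive wins; later duplicates are dropped.
                seen.add(seq)
                # Hand the packet to the decoder / analyzer here.
```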

There is another important area where your technology partner should provide advice and support. In an ideal world, the user will test any new software’s ability to operate effectively within their virtualized environment, but this is not always possible. In this situation, standards such as SMPTE 2110 become important since they help promote operational stability and interoperability. However, three different vendors can interpret a standard differently and this will affect the way they operate together (or not). This is a factor that a broadcaster needs to build into their virtualization strategy and it is one where their technology partner should be able to provide advice.

We note the amount of commentary around the cloud, IP and virtualization. These are interesting ways for a broadcaster to engineer its workflows, but they are not a cure-all. Software systems running on bare-metal, on-premises installations still have their advantages, especially in static 24/7 scenarios. It is not an either/or, black/white question. Instead it is a question of balancing workflow flexibility and versatility against operational efficiency, and it will develop over the coming years as virtualization's enabling technologies evolve. It is a question that needs careful consideration from experienced technologists. We are investing in virtualized architectures, but this does not signal the imminent end of dedicated bare-metal, software-only workflow offerings; far from it, it simply offers the customer greater choice and future-proof migration scenarios.

There is one last point to consider. The coronavirus pandemic is having a profound impact on our society, and that impact could be long-lasting. Streaming is an ever-increasing part of our lives, and content and service providers are adapting to these demands. Cloud-based media monitoring such as PRISMON.cloud will be a key factor in deciding who wins and who loses: the winners will be those who provide a broadcast-grade, reliable service to their customers, and monitoring Quality of Service is key to achieving that.

At the same time, within the media technology vendor market, it is a sad fact of life that not all companies will survive the economic downturn that will inevitably come. Unless companies possess the financial resources to support their flagging cashflows in the coming months, they will be forced to cease operations. In this environment, it is vitally important for customers to select the right technology partner.

As a privately owned company with excellent financial resources at our disposal, we are able to support our cashflow and continue investing in the research and development of the next generation of monitoring systems, both on-premises and virtualized in the cloud.
