No More Flying Blind In The Cloud

Migrating live and file-based video services to the cloud holds the promise of huge flexibility in deployment architectures, geographical backhaul possibilities, virtually limitless peak provisioning, pay-per-use pricing, on-demand access to machine learning and other advanced intellectual property, and much more. This migration is still in its infancy and will ultimately drive new cloud business models and partnerships to create viable financial common ground, but there are critical technical challenges facing video service providers looking to de-risk the move to the cloud.

Most content and service providers have operational teams that understand video very well and have years of experience working with on-premise video architectures. Many of these teams run Network Operation Centers (NOCs) that allow them to monitor the video as it traverses their video networks. The video is typically inspected at each demarcation point between pieces of equipment to provide transport (Quality of Service – QoS) and video/audio content (Quality of Experience – QoE) visibility, so operators know the video is good before it enters the delivery pipes to consumers, and it is then tracked across the delivery network.

For many content and service providers, moving their video services or video distribution to the cloud is a daunting prospect because their operations teams lose visibility. Many are used to “walled garden” architectures and do not understand the complexity of the cloud, and the thought of sending precious streams off into the unknown with no knowledge of whether they arrived intact is just too big a leap to make. The latest wave of reliable transport technologies (SRT, Zixi, Aspera) can help carry the content to the relevant cloud data center for processing, but it still needs to be checked before and after the video processing pipeline. Otherwise, if the viewing experience is bad, how do you even begin to diagnose the issue? Figuring out where something has gone wrong without integrated cloud monitoring is like trying to find a needle in the proverbial haystack.
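To make that concrete, here is a minimal sketch (in Python, and not any specific vendor's tooling) of the kind of before-and-after check described above: probing the contribution stream as it enters the cloud and again after the processing pipeline, then comparing basic stream properties. It assumes ffprobe is installed with SRT support, and the srt:// endpoints are purely illustrative.

```python
# A minimal sketch, not any vendor's tooling: probe a contribution stream before
# and after the cloud processing pipeline and compare basic stream properties.
# Assumes ffprobe is installed with SRT support; the srt:// URLs are hypothetical.
import json
import subprocess

def probe_stream(url: str) -> dict:
    """Return codec/resolution/bitrate info for the first video stream at url."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_streams", "-select_streams", "v:0", url],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    return {k: stream.get(k) for k in ("codec_name", "width", "height", "bit_rate")}

# Hypothetical endpoints: the stream entering the cloud and the processed output.
ingress = probe_stream("srt://contribution.example.com:9000?mode=caller")
egress = probe_stream("srt://egress.example.com:9001?mode=caller")

# Flag any unexpected change introduced by the processing pipeline.
for key, value in ingress.items():
    if egress[key] != value:
        print(f"mismatch on {key}: {value} -> {egress[key]}")
```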

Plotting a safe course in your migration to the cloud

When migrating video services to the cloud, effective monitoring of all video streams is essential to understand what is happening at each stage of the delivery process. Video providers need to understand and plan for this before they start their migration to cloud-based video services, whether for 24/7 channels or for scheduled/on-demand live events. For live events, the channel and its associated monitoring may only be orchestrated and deployed for the duration of the event (with suitable buffers before and after for pre-testing and post-event data analysis), as sketched below, so ensuring operational validation of the streams as they transition to and across the cloud is critical to isolating issues should anything go wrong.
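As an illustration of that event-scoped deployment, the following minimal sketch derives the window during which monitoring probes would be spun up and torn down; the buffer durations and event times are example assumptions, not a product API.

```python
# A minimal sketch, not any vendor's orchestration API: derive the window during
# which event-scoped monitoring should be deployed, with buffers for pre-testing
# and post-event data analysis. Event times and buffer sizes are examples.
from datetime import datetime, timedelta

def monitoring_window(event_start: datetime, event_end: datetime,
                      pre_buffer: timedelta = timedelta(hours=1),
                      post_buffer: timedelta = timedelta(minutes=30)):
    """Return (deploy_at, tear_down_at) for a live-event monitoring deployment."""
    return event_start - pre_buffer, event_end + post_buffer

deploy_at, tear_down_at = monitoring_window(
    datetime(2024, 6, 1, 19, 0), datetime(2024, 6, 1, 22, 0))
print(f"spin up probes at {deploy_at}, tear down at {tear_down_at}")
```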

As a core principle, the workflow’s monitoring needs to be designed in as early as possible. QoE/QoS is just as essential as ever, but different types of protocol monitoring are required at various stages in the chain, such as at the headend compared with later in the content distribution network (CDN). Furthermore, the system might be used for more than just quality monitoring: tracking SCTE marker insertion for advertising and checking correct splicing for Live-VoD and VoD-Live transitions are possible cross-over applications that may need to be considered at an early stage. Lastly, and many people overlook this, there is a need for a real-time feedback loop for dynamic behavior within the delivery system. As we start to take advantage of cloud flexibility for auto-scaling and self-verifying/healing architectures, real-time monitoring becomes a critical part of the feedback mechanism for making the right dynamic decisions, as the sketch below illustrates.
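The sketch below shows that feedback idea in the simplest possible terms: monitoring results drive either a self-healing restart or a scale-up decision. The helper functions (poll_metrics, scale_up, restart_stage) and the thresholds are hypothetical stand-ins for whatever the monitoring and orchestration layers actually expose.

```python
# A minimal sketch, assuming hypothetical helpers: a real-time feedback loop in
# which monitoring results drive dynamic decisions such as scaling up transcode
# capacity or restarting an unhealthy stage. Not a specific product API.
import time

QOE_FLOOR = 3.5            # example QoE score below which we intervene
ERROR_RATE_CEILING = 0.01  # example transport error-rate threshold

def feedback_loop(poll_metrics, scale_up, restart_stage, interval_s=5):
    """Poll stream metrics and react; the three callables are stand-ins for
    whatever the monitoring and orchestration layers actually expose."""
    while True:
        # e.g. {"qoe_score": 4.2, "error_rate": 0.002, "stage": "abr-transcode"}
        metrics = poll_metrics()
        if metrics["error_rate"] > ERROR_RATE_CEILING:
            restart_stage(metrics["stage"])   # self-healing path
        elif metrics["qoe_score"] < QOE_FLOOR:
            scale_up(metrics["stage"])        # auto-scaling path
        time.sleep(interval_s)
```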

Cloud contribution of live video is a hot topic at the moment, and effective, versatile monitoring is especially important for contribution. Live video needs smooth delivery, and traditional TCP-based delivery can be unstable. As low latency becomes more important, jitter will again become critical, just as it did for multicast: the more real-time the video needs to be, the smoother the delivery needs to be, and any inconsistency can affect the whole chain. This also drives the need for more real-time monitoring, because tolerances in delivery between the elements of the delivery architecture become tighter. One common way to quantify that smoothness is the interarrival jitter estimator sketched below.
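The following sketch implements the interarrival jitter estimator from RFC 3550; the timestamp samples are illustrative, and in practice they would come from packet headers and measured arrival times.

```python
# A minimal sketch of the RFC 3550 interarrival jitter estimator, one common way
# to quantify delivery smoothness. The sample timestamps below are illustrative.
def interarrival_jitter(samples):
    """samples: iterable of (send_time, arrival_time) pairs in seconds.
    Returns the smoothed jitter estimate J as defined in RFC 3550."""
    jitter = 0.0
    prev_transit = None
    for send, arrival in samples:
        transit = arrival - send
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0   # exponential smoothing per RFC 3550
        prev_transit = transit
    return jitter

# Example: packets arriving with steadily growing delay show rising jitter.
print(interarrival_jitter([(0.00, 0.050), (0.02, 0.071), (0.04, 0.095), (0.06, 0.121)]))
```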

Hybrid approaches provide a bridge to the cloud

On-premise video infrastructures aren’t going to disappear overnight and will continue to play an important role for the foreseeable future. The $64 million question is: how can the industry provide hybrid solutions that support and evolve existing deployments while enabling the transition of video workflows to the cloud?

Companies making the transition can’t handle too many changes at once, so transitioning ‘familiarity’ is key. At Telestream, we are investing in enabling a smooth transition with our live and VoD monitoring systems, file-based QC and our Vantage Media Processing Platform through Vantage Cloud Port. This hybrid approach means that workflows and transcoding functions can be deployed in the cloud as necessary, and in a seamless way, using the tools and APIs people are familiar with today. For monitoring, it is essential for operations teams that the KPIs they rely on on-premise remain available in the cloud, supplemented with relevant KPIs for the newer transport protocols such as SRT and Zixi, as the sketch below suggests. As these protocols extend video transport globally across the major cloud provider backbones with the help of distribution frameworks like Haivision Hub, Zixi’s ZenMaster and AWS MediaConnect, monitoring of the video on a global basis across these networks will become as essential and familiar as it is today in traditional architectures.
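As a rough illustration (not a Telestream data model), the sketch below shows how a familiar on-premise KPI set might simply be extended with transport-layer KPIs for an SRT-style protocol; all field names are assumptions.

```python
# A minimal sketch, not a Telestream data model: the familiar on-premise KPIs
# carried into the cloud, supplemented with transport-specific KPIs for an
# SRT-style protocol. Field names are illustrative assumptions.
from dataclasses import dataclass, asdict

@dataclass
class BaseKpis:
    continuity_errors: int        # MPEG-TS continuity-counter errors
    pcr_jitter_ms: float          # PCR accuracy/jitter
    black_frame_seconds: float    # QoE: detected black video
    audio_silence_seconds: float  # QoE: detected silence

@dataclass
class SrtTransportKpis:
    rtt_ms: float             # round-trip time reported by the transport layer
    retransmitted_pkts: int   # packets recovered by ARQ
    dropped_pkts: int         # packets lost beyond the latency window

def cloud_kpi_record(base: BaseKpis, transport: SrtTransportKpis) -> dict:
    """Merge the familiar KPI set with the transport-layer additions."""
    return {**asdict(base), **asdict(transport)}
```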

We live in times of unprecedented change, and the on-premise-to-cloud transition is all the harder because it comes on the back of many other industry changes around 4K/8K, HDR, low latency, and the use of secure, reliable delivery protocols and architectures such as SRT and Zixi.

At the same time, the video industry is migrating to more evolved underlying architectures leveraging microservices and more on-demand API usage, so familiarity, to the extent possible, will be a big help while companies catch up and retrain to embrace new architectures and methodologies.
