Cloud Native - It’s Not All Or Nothing

Cloud native processing has become a real opportunity for broadcasters in recent years as latencies, processing speeds, and storage capacities have not only met broadcast requirements but surpassed them.

Start-ups without the baggage of legacy systems, workflows, or archives have the opportunity to completely rethink their workflows and build truly cloud native systems. However, the vast majority of broadcasters considering migration to the cloud will have decades' worth of legacy systems, workflows, and archive material to think about. For them, true cloud native adoption is impossible. But they do have the option of building hybrid models that combine existing on-prem workflows with cloud native systems.

The power of cloud native relies upon separating software processes from the underlying hardware through a method of software abstraction. Not only does this provide more flexibility, but it also reduces the risk of having to rely on one specific public cloud vendor.

Central to abstraction are APIs, as they provide a generic wrapper for the underlying systems that process the video, audio, and metadata. Using facilities such as microservices and containers, system designers abstract away the low-level functionality behind the API so they don't need to get bogged down in the detail of how each process is applied.
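The idea can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the operation names, `ProcessRequest` fields, and handler functions are invented for the example, not any vendor's API): the caller describes *what* it wants done through one generic entry point, while the registry decides *how* it is done.

```python
from dataclasses import dataclass, field


# Hypothetical job request: the caller states the desired operation and
# parameters, with no knowledge of the service that will perform it.
@dataclass
class ProcessRequest:
    source_uri: str          # where the media lives
    operation: str           # e.g. "proc-amp", "standards-convert"
    params: dict = field(default_factory=dict)


# Stand-in implementations. In a real system each entry would dispatch
# to a containerized microservice; here they are plain functions so the
# shape of the abstraction stays visible.
def _proc_amp(req: ProcessRequest) -> str:
    return f"proc-amp applied to {req.source_uri} with {req.params}"


def _standards_convert(req: ProcessRequest) -> str:
    return f"{req.source_uri} converted to {req.params.get('standard')}"


_REGISTRY = {
    "proc-amp": _proc_amp,
    "standards-convert": _standards_convert,
}


def submit(req: ProcessRequest) -> str:
    """Single generic entry point: the API hides which service runs."""
    try:
        handler = _REGISTRY[req.operation]
    except KeyError:
        raise ValueError(f"unknown operation: {req.operation}")
    return handler(req)
```

A workflow simply calls `submit(ProcessRequest("s3://bucket/clip.mxf", "standards-convert", {"standard": "1080i50"}))`; which container or vendor service actually performs the conversion is an implementation detail behind the registry.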

The broadcast industry has matured to an extent where we can assume a proc-amp, standards converter, or even production switcher will just work. Therefore, we don't need to spend hundreds of hours re-inventing the wheel. Instead of re-designing a proc-amp, why not just use one of the library instances available from a multitude of vendors? Forward-thinking vendors will already be providing pay-as-you-go models for their applications. Some are even implementing a try-before-you-buy model to give system designers the opportunity to test the APIs and the application in their workflows before they commit to the design.

API abstraction also helps with future-proofing a design as the point of demarcation is well defined. In software terms, swapping a standards converter from vendor A for one from vendor B is a relatively straightforward task. Admittedly, this does rely on the broadcaster's system designers providing abstract interfaces within their software design so that workflow dependencies can be easily established.
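A short sketch shows what such an abstract interface might look like. The class and method names here are hypothetical, but the pattern is standard: workflow code depends only on the interface, so replacing vendor A with vendor B touches one line of wiring.

```python
from abc import ABC, abstractmethod


# Hypothetical abstract interface: the well-defined point of demarcation.
class StandardsConverter(ABC):
    @abstractmethod
    def convert(self, clip: str, target: str) -> str:
        """Convert a clip to the target standard and return the result URI."""


class VendorAConverter(StandardsConverter):
    def convert(self, clip: str, target: str) -> str:
        # A real implementation would call vendor A's service here.
        return f"[vendor-a] {clip} -> {target}"


class VendorBConverter(StandardsConverter):
    def convert(self, clip: str, target: str) -> str:
        return f"[vendor-b] {clip} -> {target}"


def transcode_workflow(converter: StandardsConverter, clip: str) -> str:
    # The workflow never names a vendor; it only knows the interface.
    return converter.convert(clip, "1080i50")
```

Switching suppliers then means constructing `VendorBConverter()` instead of `VendorAConverter()` at the point where the workflow is assembled; nothing downstream changes.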

Speed is another area where cloud native solutions help broadcasters. It's entirely possible to create a proof of concept in a matter of hours, not weeks or months. With traditional broadcast systems, hardware procurement and installation were always the blocker, but with datacenters already in place, building the workflows from known software and libraries becomes much easier.

Broadcasters with existing workflows need to consider how they interface to their current hardware. Although the cloud native textbook tells us to just throw away existing workflows and find more efficient methods of working, this is often just not practical. If an on-prem hardware solution exists that cannot be replicated in the cloud, then a server will need to be installed alongside it to act as a proxy controller. The signals will probably be SDI or AES, so they will need to be converted into a file or stream at some point before being sent to the cloud.

These challenges might seem daunting, but keeping an open mind often reveals that they are rarely insurmountable, and solutions can be found.

All this leads to the hybrid model. That is, just because we can move to cloud native, doesn’t mean we have to. In twenty years, most broadcasters may well have moved to a cloud native model, but in the meantime, we must work with the hybrid approach.

That said, one of the challenges we face is to be careful that we don't merely replicate existing workflows in the cloud, so they just become a copy of the existing on-prem design. Doing so completely misses the point of public cloud computing: the whole objective is to achieve scalability through software abstraction.

One example of workflows that can be optimized and scaled in the cloud is batch processing. Standards conversion, video color and level processing, and audio loudness adjustment are all examples. It's very tempting to just move a file from A to B, process it, and then move it to C. But is this the most efficient way of working? Does the process need high CPU, high disk, or high I/O access? Knowing this allows the system designer to choose the appropriate resource for the task at hand.
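The decision above can be made explicit in code. This is a deliberately simplified sketch (the `JobProfile` fields and the instance-class names are illustrative, not any cloud vendor's catalogue): the job is profiled first, and the resource class follows from the profile rather than from habit.

```python
from dataclasses import dataclass


# Hypothetical job profile captured before scheduling a batch process.
@dataclass
class JobProfile:
    cpu_heavy: bool   # e.g. standards conversion (heavy computation)
    io_heavy: bool    # e.g. a loudness pass streaming through large files


def choose_resource(profile: JobProfile) -> str:
    """Map a job's measured profile to an illustrative resource class."""
    if profile.cpu_heavy and profile.io_heavy:
        return "compute-optimized + local NVMe"
    if profile.cpu_heavy:
        return "compute-optimized"
    if profile.io_heavy:
        return "storage-optimized"
    return "general-purpose"
```

For instance, a standards conversion that reads and writes large mezzanine files would map to `compute-optimized + local NVMe`, while a metadata-only pass stays on general-purpose capacity; either way, the resource is chosen to fit the job rather than copied from the on-prem design.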

This leads on to the concept of Agile development. DevOps engineers look at the world in a different way to the traditional broadcast engineer. Because they can build systems quickly, they adopt thought processes that embrace rapid change, designing and building systems that can adapt quickly to changing business demands. Silo working practices are frowned upon, to the extent that collaboration is assumed and expected; hence the popularity of open source in the DevOps community.

Cloud native may be the utopian dream, but the harsh reality is that most broadcasters have so many legacy systems with on-prem hardware dependencies that it is almost impossible to move directly to the cloud in one leap. Instead, a hybrid approach is adopted. But employing software abstraction to bring scalability must be at the core of any cloud integration.
