Cloud Microservice Workflow Design - Part 1

The power and flexibility of cloud computing is being felt by broadcasters throughout the world. Scaling delivers enormous resource, and the levels of resilience available from international public cloud vendors are remarkable. It’s difficult to see how any broadcaster would run out of computing power or storage, even with 4K and 8K infrastructures.




Cloud computing wasn’t built specifically for broadcasters, but to provide more generic business workflow solutions for industry in general. Manufacturing, publishing, medical, finance and retail, to name just a few, are industries that have all benefited from the development of and continued investment in cloud infrastructure.

That said, all these industry sectors have much in common with broadcasting. They all have peaks and troughs in their delivery cycles, and they all find it difficult, if not impossible, to predict future trends with absolute certainty. In recent years cloud computing speeds have reached a point where they can be used in real-time television, enabling broadcasters to ride the crest of innovation that many other industries have been enjoying for years.

Although broadcasters have built flexibility into their infrastructures by using central routing matrices, tie-lines and patch cords, the limiting factor has always been their reliance on specialist SDI and AES distribution systems - synchronous transport methods that have served the broadcast industry well for decades. Unfortunately, they are static and difficult to scale.

Transitioning to IP has helped many broadcasters transcend the limitations of SDI and AES to provide asynchronous and scalable networks. Furthermore, adopting IP facilitates connectivity to computers through the Ethernet port, which in turn removes the broadcaster’s reliance on custom interfaces. This has inadvertently provided a gateway to cloud computing.

Treating the public cloud as a gigantic compute and storage resource only scrapes the surface of its capabilities. The real power becomes apparent when we consider the scalability of the cloud and its ability to dynamically respond to the peak needs of the business.

There are few industries in the world that can boast constant demand for their products and services with little variation, and broadcasting is no exception. Most, if not all, broadcast infrastructures have been designed to meet peak demand. Often, expensive resource sits around doing nothing, as the natural cycles of broadcasting mean the equipment is not used every minute of every day.

Being able to add and remove computer resource “on demand” delivers new opportunities for broadcasters. The key to making the most effective use of the cloud is to build systems, from the ground up, that exploit this scalability. It might sound counter-intuitive to a seasoned broadcast engineer who has cut their teeth on thirty years of hardware that we can simply delete resource and stop paying for it when we no longer need it, but this is exactly what we do with cloud computing.

Broadcasters often transcode media assets, but not all the time; they may only do this during the week, for eight hours each day. With traditional hardware solutions, that expensive resource spent most of its time doing nothing. With cloud computing, we can spin up new services as required. If, during an eight-hour shift, an operator needs four transcoders instead of one, they can simply spin up three additional services and quadruple their throughput.
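To make this concrete, the short Python sketch below asks a hypothetical orchestration endpoint to run four transcoder instances at the start of a shift and scale back to one at the end. The URL, payload fields and credentials are placeholder assumptions for illustration, not any particular vendor’s API.

```python
import requests

ORCHESTRATOR = "https://orchestrator.example.com/v1/services"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def scale_transcoders(instance_count: int) -> None:
    """Ask the (hypothetical) orchestration API to run this many transcoder instances."""
    response = requests.patch(
        f"{ORCHESTRATOR}/transcoder",
        json={"desired_instances": instance_count},
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()

# Start of the eight-hour shift: four transcoders instead of one.
scale_transcoders(4)

# End of shift: scale back down and stop paying for idle capacity.
scale_transcoders(1)
```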

Cloud computing is the next major development for broadcasters. Combined with dynamic workflows and cloud microservices, it allows efficiencies to soar and lets broadcasters design on a “needs basis” as opposed to peak demand. A welcome result for any CFO, and for any engineer looking to design for the future.



To truly deliver the efficiencies and reliability COTS infrastructures and cloud deployments promise to offer, we must adopt entirely new design philosophies that cut to the very core of our understanding of how broadcast infrastructures operate.

Simply provisioning software versions of the components that make up broadcast workflows doesn’t even get close to leveraging the power of cloud systems.

The dynamic ability of cloud systems is much more than just pay-as-you-go computing. If correctly embraced, the philosophy of scalability soon demonstrates that cloud systems are about far more than saving money.

Few broadcast infrastructures use all their resources 24/7. Even in these days of highly efficient automation, expensive hardware can sit around for hours or even days without being used. Most workflows need human intervention at some point and are constrained by the availability of operators and engineers. Production requirements often exhibit usage patterns based on peak demands, limiting the efficiency of workflow components.

Hardware Inefficiencies
On-prem datacenters go some way to alleviating these challenges. Historically, broadcasters had to procure specialist hardware that could only be used for one specific purpose. For example, a standards converter would only ever deliver this one task. It’s possible that some sub-functionality of the product could be used, such as the color corrector in its proc amp, but these were often use-cases at the periphery of the operation that added little to the monetary efficiency of the standards converter. COTS servers allowed multiple software applications to be run on them, thus making better use of the capital resource.

One of the challenges broadcasters have faced since the first television studios were built is that they must provision for peak demand. The high-bandwidth nature of video distribution and processing has left little scope for making more efficient use of on-prem datacenters. The same challenges apply, leading to server, storage and network resources all being designed for the worst-case scenario, or peak demand.

It is possible to spin up and spin down virtual machines within an on-prem datacenter. However, design engineers must still provision for peak demand to make sure there is enough hardware resource available. With the speed at which requirements change and scale, this can be a daunting task, especially if old-school static methodologies are applied.
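To illustrate the cost of provisioning for peak, the short Python sketch below compares an on-prem fleet sized for peak demand with the hours of compute actually consumed. The figures (twenty servers at peak, four at baseline, an eight-hour weekday peak) are illustrative assumptions, not measurements.

```python
# Illustrative figures only: a fleet sized for a weekday peak of 20 servers,
# with a baseline need of 4 servers the rest of the time.
PEAK_SERVERS = 20
BASELINE_SERVERS = 4
PEAK_HOURS_PER_WEEK = 8 * 5          # eight-hour peak, weekdays only
HOURS_PER_WEEK = 24 * 7

# On-prem: all 20 servers exist (and are paid for) every hour of the week.
provisioned_server_hours = PEAK_SERVERS * HOURS_PER_WEEK

# Actual demand: 20 servers during the peak, 4 otherwise.
used_server_hours = (PEAK_SERVERS * PEAK_HOURS_PER_WEEK
                     + BASELINE_SERVERS * (HOURS_PER_WEEK - PEAK_HOURS_PER_WEEK))

utilization = used_server_hours / provisioned_server_hours
print(f"Fleet utilization when provisioned for peak: {utilization:.0%}")
# With these assumptions the peak-provisioned fleet is roughly 39% utilized;
# an on-demand model would only pay for the server hours actually used.
```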

Responding To Peak Demand
In this context, the case for cloud computing is compelling. Broadcasters do not need to be concerned with peak demand as there is always more resource available in the cloud. One challenge broadcasters do face is understanding when to provision just enough cloud resource and when to scale. Given the transient demands modern broadcasting places on resource, getting this demarcation right is not as easy as it may first appear.
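One common pattern, sketched below in Python, is to drive that scaling decision from a measurable signal such as the depth of the job queue: scale out when work is backing up, scale back in when instances sit idle. The thresholds are illustrative assumptions, and the result could feed the hypothetical scale_transcoders() call from the earlier sketch.

```python
import math

# Illustrative tuning values, not recommendations.
JOBS_PER_INSTANCE = 5      # how many queued jobs one transcoder can absorb promptly
MIN_INSTANCES = 1
MAX_INSTANCES = 20         # upper bound agreed with the finance team

def desired_instances(queue_depth: int) -> int:
    """Choose an instance count from the current backlog of transcode jobs."""
    wanted = math.ceil(queue_depth / JOBS_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, wanted))

# Example: 23 jobs waiting -> 5 instances; an empty queue -> scale back to 1.
for depth in (23, 0):
    print(depth, "queued jobs ->", desired_instances(depth), "instances")
```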

Microservice functionality is abstracted away from the user through the API gateway. This greatly reduces the complexity of the broadcaster’s workflow design as they only see a REST API gateway. Furthermore, the broadcaster can allow microservice vendors granular access to parts of their storage, enabling efficient cloud-based media asset processing.
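As an illustration of that abstraction, the Python sketch below submits a transcode job to a hypothetical REST API gateway, passing a reference to the source asset and a scoped, pre-authorized URL that grants the microservice limited access to the broadcaster’s storage. The endpoint, payload and URLs are assumptions for the sake of the example, not a specific product’s interface.

```python
import requests

GATEWAY = "https://api-gateway.example.com/v1"   # hypothetical REST API gateway
HEADERS = {"Authorization": "Bearer <token>"}    # placeholder credential

# The broadcaster only sees the gateway; the microservices behind it stay hidden.
job = {
    "operation": "transcode",
    "profile": "h264-1080p50",
    # A time-limited, scoped URL gives the vendor's microservice access to just
    # this asset in the broadcaster's storage, not the whole bucket.
    "source_url": "https://storage.example.com/assets/promo.mxf?signature=<scoped>",
    "destination": "https://storage.example.com/outputs/promo-1080p50.mp4?signature=<scoped>",
}

response = requests.post(f"{GATEWAY}/jobs", json=job, headers=HEADERS, timeout=30)
response.raise_for_status()
print("Job accepted:", response.json().get("job_id"))
```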


Agile development methodologies embrace change. Modification isn’t just an add-on, it’s at the very core of their design principles and made all the more powerful by cloud computing. Agile developers are adept at working on remote systems that change by the minute. Scaling systems up and down is key to how they operate, and this is a design principle that broadcasters can greatly benefit from.

This leads naturally to provisioning microservices in media supply chains. Traditional software relied on large monolithic designs that had to be recompiled in their entirety every time a new feature was added or a bug fix deployed. For small-scale software this was relatively easy to work with, but as software became more complex, maintaining it became an increasingly difficult challenge.

Software Optimization
Microservices solve two fundamental challenges: they deliver highly maintainable software and facilitate fast deployment streamlined to the hardware they operate on. Although the Von Neumann x86 architecture is ubiquitous in every datacenter throughout the world, it has more variants than we care to think of.

This creates the potential for suboptimal software, as any fine-tuning for one vendor’s hardware is unlikely to be replicable on another vendor’s hardware.

Operating in the cloud helps resolve this. The microservice vendor not only fine-tunes the software for the platforms they support, but is also able to test the code in a well-defined system that can be reliably replicated. This leads to highly efficient, reliable systems.

The systems developers write code on are exactly the same cloud infrastructures as those used by broadcasters when employing the media processing facilities. The broadcaster selects their preferred cloud vendor and, from there, the microservices they want to use. Code maintenance is further streamlined as broadcasters do not need to update software themselves; this is provided automatically by the vendor when they update the microservice. Testing and QA (Quality Assurance) are applied by the vendor, and quite often the broadcaster may not even be aware that a new software version has been deployed.

Although virtualization can meet the demands of dynamic systems, virtual machines are relatively slow and resource-hungry compared to microservices. The key to understanding the efficiency gains is to recognize the locality of the operating system. Multiple microservices each have their own container to hold the application-specific libraries and software, but reside within the same operating system kernel, making creation and deletion much quicker. Unlike a virtualized machine, the host server doesn’t have to create multiple resource-intensive operating system instances. Instead, each microservice is a relatively small, compartmentalized container of code and libraries that can be quickly designed, tested, QA’ed and enabled for operation.
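The sketch below uses the Docker SDK for Python to start several containers from the same image; each gets its own filesystem and libraries but shares the host kernel, which is why they appear in seconds rather than the minutes a full virtual machine boot can take. The image name and container count are arbitrary choices for illustration.

```python
import docker

client = docker.from_env()

# Start four lightweight containers from the same image. Each container has its
# own isolated filesystem and libraries, but all share the host's kernel, so
# creation is fast and the per-instance overhead is small.
containers = [
    client.containers.run("alpine:3.19", ["sleep", "300"], detach=True)
    for _ in range(4)
]

for c in containers:
    print(c.short_id, c.status)

# When the work is done, the containers are removed just as quickly.
for c in containers:
    c.stop()
    c.remove()
```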
