Cloud Microservice Workflow Design - Part 1

The power and flexibility of cloud computing are being felt by broadcasters throughout the world. Scaling delivers incredible resource, and the levels of resilience available from international public cloud vendors are truly eye-watering. It’s difficult to see how any broadcaster would run out of computing power or storage, even with 4K and 8K infrastructures.




Cloud computing wasn’t built specifically for broadcasters, but to provide more generic business workflow solutions for industry in general. Manufacturing, publishing, medical, finance and retail, to name just a few, are industries that have all benefited from the development of, and continued investment in, cloud infrastructure.

That said, all these industry sectors have much in common with broadcasting. They all have peaks and troughs in their delivery cycles, and they all find it difficult, if not impossible, to predict future trends with absolute certainty. In recent years cloud computing speeds have reached a point where they can now be used in real-time television, enabling us to ride the crest of the innovation wave that many other industries have been enjoying for years.

Although broadcasters have built flexibility into their infrastructures by using central routing matrices, tie-lines and patch cords, the limiting factor has always been our reliance on the specialist SDI and AES distribution systems - synchronous transport stream methods that have served the broadcast industry well for decades. Unfortunately, they are static and difficult to scale.

Transitioning to IP has helped many broadcasters transcend the limitations of SDI and AES to provide asynchronous and scalable networks. Furthermore, adopting IP facilitates connectivity to computers through the Ethernet port, which in turn removes the broadcaster’s reliance on custom interfaces. This has inadvertently provided a gateway to cloud computing.

Treating the public cloud as a gigantic compute and storage resource only scrapes the surface of its capabilities. The real power becomes apparent when we consider the scalability of the cloud and its ability to dynamically respond to the peak needs of the business.

There are few industries in the world that can boast constant demand for their products and services with little variation, and broadcasting is no different. Most, if not all, broadcast infrastructures have been designed to meet peak demand. Often, expensive resource sits around doing nothing as the natural cycles of broadcasting mean the equipment is not used every minute of every day.

Being able to add and remove computer resource “on demand” delivers new opportunities for broadcasters. And the key to making the most effective use of the cloud is to build systems, from the ground up, that exploit this scalability. It might sound counter-intuitive to a seasoned broadcast engineer who’s cut their teeth on thirty years of hardware that we can simply delete resource and stop paying for it when we no longer need it, but this is exactly what we do with cloud computing.

Broadcasters often transcode media assets, but not all the time; they may only do this during the week for eight hours each day. With traditional hardware solutions, this expensive resource spent most of its time doing nothing. With cloud computing, we can spin up new services as required. It might be that during an eight-hour shift the operator needs four transcoders instead of one. They can simply spin up three additional services and quadruple their throughput.
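As a rough illustration, a scheduler could size the number of transcode instances from the pending workload rather than from a fixed hardware pool. The sketch below is a minimal example of that idea; the job counts, durations and the ceiling on instances are illustrative assumptions, not figures from any particular vendor.

```python
import math

def transcoders_needed(pending_jobs: int,
                       avg_job_minutes: float,
                       shift_minutes: int = 8 * 60,
                       max_instances: int = 8) -> int:
    """Estimate how many transcode instances to spin up so the pending
    backlog clears within the current shift. All figures are assumptions."""
    if pending_jobs == 0:
        return 0  # nothing queued: pay for nothing
    total_minutes = pending_jobs * avg_job_minutes
    needed = math.ceil(total_minutes / shift_minutes)
    return min(needed, max_instances)

# Example: 120 assets at roughly 15 minutes each need 4 instances to finish
# inside an eight-hour shift, instead of one instance running around the clock.
print(transcoders_needed(pending_jobs=120, avg_job_minutes=15))  # -> 4
```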

Cloud computing is the next major development for broadcasters. Combined with dynamic workflows and cloud microservices, it will allow efficiencies to skyrocket, and broadcasters will be able to design on a “needs basis” as opposed to for peak demand. A welcome result for any CFO, and for any engineer looking to design for the future.



To truly deliver the efficiencies and reliability COTS infrastructures and cloud deployments promise to offer, we must adopt entirely new design philosophies that cut to the very core of our understanding of how broadcast infrastructures operate.

Simply provisioning software versions of the components that make up broadcast workflows doesn’t even get close to leveraging the power of cloud systems.

The dynamic ability of cloud systems is much more than just pay-as-you-go computing. If correctly embraced, the philosophy of scalability soon demonstrates that these systems are about far more than simply saving money.

Few broadcast infrastructures use all their resources 24/7. Even in these days of highly efficient automation, expensive hardware can sit around for hours or even days without being used. Most workflows need human intervention at some point and are constrained by the availability of operators and engineers. Production requirements often exhibit usage patterns based on peak demands, limiting the efficiency of workflow components.

Hardware Inefficiencies
On-prem datacenters go some way toward alleviating these challenges. Historically, broadcasters have had to procure specialist hardware that could only be used for one specific purpose. For example, a standards converter would only ever perform this one task. It’s possible that some sub-functionality of the product could be used, such as its color corrector in the proc amp. Still, these were often use-cases at the periphery of the operation that added little to the monetary efficiency of the standards converter. COTS servers allowed multiple software applications to be run on them, thus making better use of the capital resource.

One of the challenges broadcasters have faced since the delivery of the first television studios is that they must provision for peak demand. The high-bandwidth nature of video distribution and processing has left little scope for making more efficient use of on-prem datacenters. The same challenges apply, leading to server, storage and network resources all being designed for the worst-case scenario, or peak demand.

It is possible to spin up and spin down virtual machines within your own on-prem datacenter. However, design engineers must still provision for peak demand to make sure there is enough hardware resource available. With the speed at which requirements change and scale, this can be a daunting task, especially if the old-school static methodologies are applied.
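To make the cost of static provisioning concrete, consider a simple back-of-the-envelope comparison between an estate sized for peak demand and the work it actually does on an average day. The figures in the sketch below are purely illustrative assumptions.

```python
import math

def servers_for_peak(peak_concurrent_jobs: int, jobs_per_server: int) -> int:
    """On-prem capacity has to cover the worst case, busy or not."""
    return math.ceil(peak_concurrent_jobs / jobs_per_server)

def average_utilisation(avg_concurrent_jobs: float,
                        peak_concurrent_jobs: int,
                        jobs_per_server: int) -> float:
    """Fraction of the peak-sized estate doing useful work on an average day."""
    servers = servers_for_peak(peak_concurrent_jobs, jobs_per_server)
    return avg_concurrent_jobs / (servers * jobs_per_server)

# Illustrative assumptions: a peak of 40 concurrent jobs, an average of 8 and
# 4 jobs per server means 10 servers provisioned at 20% average utilisation.
print(servers_for_peak(40, 4))        # -> 10
print(average_utilisation(8, 40, 4))  # -> 0.2
```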

Responding To Peak Demand
In this context, the case for cloud computing is compelling. Broadcasters do not need to be concerned with peak demand as there is always more resource in the cloud. One of the challenges broadcasters do face is understanding when to provision just enough cloud resources and where to scale. By the very nature of the transient demands modern broadcasting places on resource, getting this demarcation right is not as easy as it may first appear.
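One way to frame that demarcation is a scaling rule with a little hysteresis, so that transient spikes don’t cause instances to be created and destroyed every few seconds. The thresholds, floor and ceiling in the sketch below are assumptions for illustration only; a production autoscaler would also apply a cooldown between changes.

```python
def desired_instances(current: int,
                      queue_depth: int,
                      scale_up_threshold: int = 20,
                      scale_down_threshold: int = 5,
                      min_instances: int = 1,
                      max_instances: int = 8) -> int:
    """Decide the next instance count from the job queue depth.
    All thresholds and limits here are illustrative assumptions."""
    per_instance = queue_depth / max(current, 1)
    if per_instance > scale_up_threshold and current < max_instances:
        return current + 1
    if per_instance < scale_down_threshold and current > min_instances:
        return current - 1
    return current  # within the comfort band: leave things alone

# A queue of 90 jobs across 2 instances (45 each) triggers a scale-up.
print(desired_instances(current=2, queue_depth=90))  # -> 3
# A queue of 4 jobs across 2 instances triggers a scale-down.
print(desired_instances(current=2, queue_depth=4))   # -> 1
```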

Microservice functionality is abstracted away from the user through the API gateway. This greatly reduces the complexity of the broadcaster’s workflow design as they only see a REST API gateway. Furthermore, the broadcaster can allow microservice vendors granular access to parts of their storage, enabling efficient cloud-based media asset processing.
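In practice, the broadcaster’s workflow code only ever talks to that gateway. The snippet below is a minimal sketch of submitting a transcode job over REST; the endpoint path, field names and token handling are assumptions, not any specific vendor’s published API.

```python
import json
import urllib.request

# The gateway URL, endpoint path, field names and token are all illustrative
# assumptions - a real microservice vendor publishes its own API schema.
GATEWAY = "https://gateway.example.com/v1"
TOKEN = "replace-with-a-real-access-token"

def submit_transcode(source_uri: str, profile: str) -> dict:
    """POST a job request to the API gateway and return its JSON response."""
    payload = json.dumps({
        "source": source_uri,   # storage location the vendor is allowed to read
        "profile": profile,     # e.g. a house delivery profile name
    }).encode("utf-8")
    request = urllib.request.Request(
        f"{GATEWAY}/transcode/jobs",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# The caller never sees which microservices run behind the gateway:
# job = submit_transcode("s3://house-masters/ep101.mxf", "hd-h264-web")
```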

Agile development methodologies embrace change. Modification isn’t just an add-on, it’s at the very core of their design principles and made all the more powerful by cloud computing. Agile developers are adept at working on remote systems that change by the minute. Scaling systems up and down is key to how they operate, and this is a design principle that broadcasters can greatly benefit from.

This leads naturally to provisioning microservices in media supply chains. Traditional software relied on large monolithic designs that had to be recompiled in their entirety every time a new feature was added or a bug fix was deployed. For small-scale software this was relatively easy to work with, but as software became more complex, the ability to maintain it became an increasingly difficult challenge.

Software Optimization
Microservices solve two fundamental challenges: they deliver highly maintainable software and facilitate fast deployment streamlined to the hardware they operate on. Although the Von Neumann x86 architecture is ubiquitous in every datacenter throughout the world, it has more variants than we care to think of.

This leads to the potential for suboptimal software, as any fine-tuning for a particular vendor’s hardware is unlikely to be replicable on another vendor’s hardware.

Operating in the cloud helps resolve this. The microservices vendor not only fine-tunes the software for the platforms they support, but they are also able to test the code in a well-defined system that can be reliably replicated. This leads to highly efficient, reliable systems.

The systems developers write code on are exactly the same cloud infrastructures the broadcasters use when employing the media processing facilities. The broadcaster will select their preferred cloud vendor and from there select the microservice they want to use. Code maintenance is further streamlined as broadcasters do not need to update software themselves; this is provided automatically by the vendor when they update the microservice. Testing and QA (Quality Assurance) are applied by the vendor, and quite often the broadcaster may not even be aware that a new software version has been deployed.

Although virtualization can meet the demands of dynamic systems, virtual machines are relatively slow and resource-hungry when compared to microservices. The key to understanding the efficiency gains is to recognize the locality of the operating system. Each microservice has its own container to hold the application-specific libraries and software, but all of them reside within the same operating system kernel, making creation and deletion much quicker. Unlike with virtual machines, the host server doesn’t have to create multiple resource-intensive guest instances. Instead, each microservice is a relatively small, compartmentalized container of code and libraries that can be quickly designed, tested, QA’ed and enabled for operation.
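The speed difference is easy to demonstrate on any host with Docker installed. The sketch below assumes a locally built image called transcode-service (a hypothetical name) and simply times how quickly a handful of containers can be started and stopped on the shared kernel.

```python
import subprocess
import time

IMAGE = "transcode-service"  # hypothetical, locally built microservice image
COUNT = 4

# Assumes Docker is installed and the image already exists on the host.
start = time.perf_counter()
container_ids = []
for _ in range(COUNT):
    result = subprocess.run(
        ["docker", "run", "--detach", "--rm", IMAGE],
        check=True, capture_output=True, text=True,
    )
    container_ids.append(result.stdout.strip())
print(f"Started {COUNT} containers in {time.perf_counter() - start:.2f}s")

# Tearing them down is just as quick: only the containers stop,
# while the shared host kernel keeps running.
for cid in container_ids:
    subprocess.run(["docker", "stop", cid], check=True, capture_output=True)
```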
