Empowering Cloud Through Microservices - Part 1

Simply moving workflows and software applications to virtualized infrastructures – whether public or on-prem – will not unlock the power of cloud. Monolithic software and static workflows conspire against broadcasters striving for the flexibility, scalability, and resilience that cloud systems promise.

This article was first published as part of Essential Guide: Empowering Cloud Through Microservices.

Broadcast infrastructures demand high availability and resilience. Scalability has always been an aspiration too, but the peaky nature of program production has made it virtually impossible to achieve with traditional broadcast infrastructures.

If you build your infrastructure to handle the largest events and the busiest days with full redundancy, it sits idle most of the time. Cloud and virtualization not only deliver high availability and resilience but also massive flexibility and scalability. However, this is only possible if the infrastructure is built with cloud and virtualization in mind from the beginning.

Microservices both present a new method of designing and building software-based infrastructures and encourage a new way of thinking. A microservice isn’t necessarily owned, but instead leased for the duration of the program or service it serves. This reflects a major change in how we think about making programs.

Instead of having to procure and find funds for capital expenditure that justifies the spend for years to come, we now have the opportunity to build entire broadcast infrastructures using pay-as-you-go methodologies.

Improving Reliability

Software is traditionally built using a monolithic design, which means that a huge software release is provided for a single application or workflow. It is almost impossible to completely test the software prior to release due to the millions of combinations of inputs and outputs that are available. This can result in each release introducing bugs that have unintended consequences and unpredictable outcomes. Software engineers have tried to improve this situation by introducing functional libraries and object-oriented code, the idea being that code could be reused and, after it had been in service for some time, assumed to be largely bug-free.

Embedded systems such as proc-amps and standards converters have used monolithic code for some time. Reliability is easier to achieve as the input and output data is better understood and easier to replicate due to working in a closed environment. Furthermore, vendors working in closed environments are often writing software for custom hardware, so they have much better control of how the product behaves.

The challenge we now face with modern broadcast workflows is that they operate on open architectures, that is, we use COTS hardware with its associated operating systems. This has both reduced capital expenditure and greatly improved flexibility through the application of software functionality, but in doing so, has massively increased the potential for complexity. Furthermore, monolithic designs are not only difficult to maintain and upgrade, they do not lend themselves well to scalability. One reason for this is that monolithic architectures cannot easily duplicate themselves and coordinate user requests across multiple instances of the same application. COTS hardware has limited processing capacity, and as requirements increase, additional hardware needs to be purchased, integrated, and configured – a process that can take weeks or even months. Simply running the software on a remote server only moves the problem elsewhere: a monolithic piece of software still needs to be installed, integrated, and configured regardless of whose server it is running on.
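The scaling contrast above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: because each microservice call is self-contained, identical instances can be duplicated at will and jobs spread across them with a trivial round-robin dispatcher, which a stateful monolith cannot safely do.

```python
from itertools import cycle

# Hypothetical sketch: stateless workers keep no state between jobs,
# so any number of identical instances can serve the same workload.

def make_worker(name):
    def handle(job):
        # Each call is self-contained: nothing survives between jobs.
        return f"{name} processed {job}"
    return handle

# Three identical "transcode" instances behind a round-robin dispatcher.
instances = [make_worker(f"transcode-{i}") for i in range(3)]
dispatch = cycle(instances)

jobs = ["clip-a", "clip-b", "clip-c", "clip-d"]
results = [next(dispatch)(job) for job in jobs]
```

Adding capacity here is just appending another instance to the list; no job ever depends on which instance handled the previous one.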

Microservices both solve the challenges of monolithic software and build on the advantages of COTS-type infrastructures. One of the reasons COTS is so powerful is that hardware is much more readily available than it is with traditional closed broadcast systems.

It is worth remembering that the type of customized infrastructures broadcasters need is at the top end of the technology scale, in other words, it’s relatively expensive. But the high-end servers, switches, and storage broadcasters require are the same type of technology that other industries such as finance and medical are also using, so it is much easier to procure and support.

We can also ride the crest of the wave of innovation that these other industries provide, and microservices are just one result of these advances.

Fig 1 – Microservices are stateless allowing the API Gateway to schedule jobs within the workflow as requested. The user doesn’t know, or need to know where the microservice physically resides, only that it is a service available to them.

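The gateway pattern in Fig 1 can be sketched as follows. This is a minimal, hypothetical illustration (the class and service names are invented for the example): the user submits jobs only to the gateway, which forwards them to whatever instance is registered, so the caller never learns where the microservice physically resides.

```python
# Hypothetical sketch of the Fig 1 idea: callers see only the gateway;
# the physical location of each microservice is hidden behind it.

class ApiGateway:
    def __init__(self):
        # Service name -> handler. Where the handler actually runs
        # (on-prem, public cloud) is invisible to the caller.
        self.services = {}

    def register(self, name, handler):
        self.services[name] = handler

    def submit(self, name, payload):
        if name not in self.services:
            raise KeyError(f"no service registered for '{name}'")
        return self.services[name](payload)

gateway = ApiGateway()
gateway.register("standards-convert", lambda clip: f"{clip} converted to 1080i50")
result = gateway.submit("standards-convert", "clip-42")
```

Because the services are stateless, the gateway is free to re-register a job type against a different instance between calls without the user noticing.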

Less Is More

One of the original design philosophies of UNIX architects Ken Thompson and Dennis Ritchie was to keep the code reusable and modular. Consequently, UNIX has a host of commands that allow the output of one program to be piped into the input of another program. By keeping the operating system programs relatively small, they became much easier to maintain and support.
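The pipe idea translates directly into code. As a loose analogy in Python (the data and stage names are invented for illustration), each stage is a small function that consumes the previous stage's output, just as `grep` consumes the output of `cat`:

```python
# Each stage is small and single-purpose; composition does the work.
log_lines = ["ERROR disk full", "info startup ok", "ERROR net down"]

def grep(pattern, stream):
    # Pass through only the lines containing the pattern.
    return (line for line in stream if pattern in line)

def count(stream):
    # Count whatever the previous stage emits.
    return sum(1 for _ in stream)

# Equivalent in spirit to: cat log | grep ERROR | wc -l
errors = count(grep("ERROR", log_lines))
```

Each stage can be tested and replaced on its own, which is exactly the property microservices inherit.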

Microservices follow this same proven design philosophy. By keeping functionality well defined, with contained input and output data values, they are much easier to maintain and support. Other benefits include improved security and scalability.

In the same way that a house consists of thousands of bricks, all working together to make a huge building many times the size of one brick, microservices combine to deliver highly flexible, scalable, and resilient workflows that are much greater than the sum of the parts.

Key to understanding the microservice workflow is to appreciate the philosophy of building systems consisting of smaller parts. Unlike a brick house, we can pull the whole microservice workflow apart, delete it when it’s not needed, and then reconstruct it again in a matter of minutes through the appropriate management software. Similar to the UNIX example above, microservices can be tested individually and can be easily installed, upgraded, or rolled back without impacting other microservices. This means that a small bug in one microservice will not bring down the whole system, improving the reliability of the system even when going through regular updates.
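The independent upgrade-and-rollback property described above can be sketched with a toy service registry. This is a hypothetical illustration (the registry structure and service names are invented, not a real orchestration API): one service is upgraded, found faulty, and rolled back, while its neighbor in the workflow is never touched.

```python
# Hypothetical sketch: each service is versioned and deployed
# independently, so changing one never disturbs the others.
registry = {
    "transcode": {"v1": lambda clip: f"{clip} transcoded by v1"},
    "package":   {"v1": lambda clip: f"{clip} packaged"},
}
active = {"transcode": "v1", "package": "v1"}

def deploy(service, version, handler):
    registry[service][version] = handler
    active[service] = version

def rollback(service, version):
    # The previous version is still registered, so this is instant.
    active[service] = version

def call(service, clip):
    return registry[service][active[service]](clip)

deploy("transcode", "v2", lambda clip: f"{clip} transcoded by v2")
out_new = call("transcode", "clip")   # upgraded service
rollback("transcode", "v1")           # bug found: revert one service
out_old = call("transcode", "clip")
untouched = call("package", "clip")   # neighbor never changed
```

A fault in one service's new release is contained to that service; the rest of the workflow keeps running on its existing versions.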

Broadcast facilities have worked with the modular mindset since the first television broadcasts, but these systems have always been fairly static. We have been able to design some flexibility and scalability into the infrastructures through assignable matrices and pluggable jackfields, but the reality is that we have been saddled with having to design for peak demand. No matter how flexible we try to make an infrastructure, the static nature of single-function equipment has been a limiting factor. However, COTS infrastructures combined with manageable microservices are delivering untold levels of flexibility and scalability.
