Scalable Dynamic Software For Broadcasters: Part 3 - Microservices, Containers And Orchestration

Microservices provide a mechanism that allows broadcast facilities to scale their resources to meet viewer demand. But the true power of microservices is unlocked when we add containers and orchestration: a management system that delivers scalability, flexibility, and resilience.

Although microservices and containers provide a technical solution to increasing and decreasing functionality to meet viewer demand, they also facilitate continuous delivery. This is the ability to deliver all software changes, including bug fixes, feature releases, and configuration changes, into the production environment in the fastest and safest way possible. And by safest, we mean with the least disruption (preferably none) to the viewer.

It's important to remember that in a truly continuous delivery environment, developers do not expect to be restricted in how often they can deploy their code, which could be several times a day. Compared to the monolithic software of the past, where a code release was a major exercise, microservices working within the continuous delivery environment have removed the high stress and risk associated with software releases. Instead of treating software releases as an exception and a risk to the business, they are now considered low risk and part of daily operation.

Containers

Microservices can operate on their own, but in doing so they are difficult to deploy and manage. In effect, they just become a collection of loosely coupled small programs distributed across the compute resource. Containers group microservice applications together to provide an isolated operating environment that shares the operating system kernel of the host server.

The microservice deployment within the container consists of just the installation instructions, dependent libraries, and code, negating the need to deploy a full-blown operating system every time a microservice is started or stopped. This makes containers a lightweight alternative to virtual machines: the significant overhead of starting and stopping VMs is bypassed when using containerized microservices.
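As a simple illustration, the sketch below shows the kind of self-contained microservice that might be packaged into a container image: just the code and the runtime it depends on, with no full operating system. The service name, endpoint, and port are illustrative assumptions, not taken from any particular broadcast product.

```python
# A minimal sketch of a self-contained microservice that could be packaged
# into a container image. Only the Python runtime and this file are needed.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class TranscodeStatusHandler(BaseHTTPRequestHandler):
    """Answers a single REST endpoint; everything else returns 404."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "transcode-status", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # The container image carries this code and its dependencies -
    # no full operating system install is required.
    HTTPServer(("0.0.0.0", 8080), TranscodeStatusHandler).serve_forever()
```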

Containers also deliver independence, scalability, and lifecycle automation, and can be thought of as a management component within the orchestration system that helps microservices work together and deliver truly scalable and resilient distributed software infrastructures.

Independence allows small teams of developers to work on specific microservices without having to involve large teams. This facilitates agile working so that features can be released more quickly, and testing is more efficient and reliable.

Having the option of operating microservice applications on any platform, whether local or remote, makes a software infrastructure incredibly flexible and gives broadcasters many options, not only for meeting peak demand but also for delivering efficient and cost-effective systems.

Lifecycle automation facilitates continuous delivery pipelines so that individual software components can be added, removed, updated, and maintained as required. This would be almost impossible with a monolithic system as the software functionality cannot be split into individual components.

To recap, microservices provide independent components, so each component can work without reference to the others while communicating through well-defined API interfaces to maintain consistent control and data exchange. Components can also be developed and tested individually without having to recompile the whole application, so features can be built safely. And the whole system is decentralized, so microservice components can run from on-prem datacenters as well as public cloud services. Thanks to the APIs, communication channels, and object storage, components do not “care” where they operate from, so a broadcaster can use any combination of on-prem and off-prem hardware resources.
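To make the point concrete, here is a minimal sketch of one microservice calling another purely through its API. The peer service name and URL are hypothetical; in a real deployment they would come from configuration or service discovery, and the caller neither knows nor cares whether the peer runs on-prem or in the cloud.

```python
# A minimal sketch of service-to-service communication over a REST API.
import json
import urllib.request

PLAYOUT_API = "http://playout-service:8080/status"  # hypothetical peer service

def fetch_playout_status(url: str = PLAYOUT_API) -> dict:
    """Query a peer microservice by its API endpoint alone; its physical
    location (on-prem, datacenter VM, or public cloud) is irrelevant."""
    with urllib.request.urlopen(url, timeout=5) as response:
        return json.load(response)

if __name__ == "__main__":
    print(fetch_playout_status())
```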

Furthermore, a container can be thought of in a similar light to a physical container: it provides a mechanism to move microservices around datacenters by grouping them. By moving a microservice into a container, we are effectively putting it into its own environment, independent of the underlying hardware it is operating on. And because containers abstract the microservice components from the underlying hardware, a container can be moved onto any server, cloud, or virtual machine.

Figure 1 - The cluster forms the highest level of the system containing the nodes. Each node is a physical or virtual machine which contains the pods. The pods are abstractions that contain the individual microservices and allocate the node resources as required.


Orchestration

The containers do not exist in isolation but need a higher-level management system that distributes, schedules, enables and disables them, and this process is often referred to as orchestration. It’s important to remember that microservices can exist without containers, which can in turn exist independently of orchestration, but it is the combination of all three of these fundamental components working together that provides the power of microservice architectures.

When referring to microservices, we are really embracing the whole orchestration ecosystem. Microservices working in isolation are just small programs, but combined with containers and orchestration, form a hugely scalable architecture that facilitates software deployment and operation over on-prem and off-prem datacenters, as well as the public cloud.

There are many container orchestration systems available including Kubernetes, OpenShift, and Nomad, and all have their own methods of operation but share the concept of deploying and managing containers. 

Hierarchy

At its highest level, an orchestration system consists of a cluster, which is itself an abstraction of the whole orchestration system. The cluster embraces the nodes and the control plane; in a Kubernetes-type orchestration system, each node hosts one or more pods, and it is the pods that manage the containers and hence the microservices.
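The hierarchy can be seen directly through the orchestrator's API. The sketch below, assuming a Kubernetes cluster and the official Python client with a kubeconfig already in place, queries the control plane for the cluster's nodes and lists the pods scheduled on each one.

```python
# A minimal sketch, using the official Kubernetes Python client, that walks
# the hierarchy described above: cluster -> nodes -> pods.
from kubernetes import client, config

config.load_kube_config()          # connect to the cluster's control plane
core = client.CoreV1Api()

nodes = core.list_node().items
pods = core.list_pod_for_all_namespaces().items

for node in nodes:
    print(f"Node: {node.metadata.name}")
    for pod in pods:
        if pod.spec.node_name == node.metadata.name:
            print(f"  Pod: {pod.metadata.namespace}/{pod.metadata.name}")
```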

This might all seem like a lot of abstraction and overhead, but it is this structure that delivers full scalability, flexibility, and resilience. The control plane manages the cluster, including scheduling the applications that provide the functionality, scaling those applications, and deploying software updates.
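Scaling, for example, is simply a request to the control plane, which then starts or stops pods across the nodes to match the requested replica count. In the hedged sketch below, the deployment name and namespace are hypothetical.

```python
# A minimal sketch of asking the control plane to scale a workload.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scale a (hypothetical) ingest deployment up to four replicas for peak demand;
# the control plane schedules the extra pods onto whichever nodes have capacity.
apps.patch_namespaced_deployment_scale(
    name="ingest-service",
    namespace="production",
    body={"spec": {"replicas": 4}},
)
```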

A node is either a physical computer or a VM and provides a worker machine for the cluster. This means a node can exist on a physical machine, on a localized VM in a datacenter, or in a public cloud, and the orchestration layer, through the control plane, links the nodes together so that they can be physically dispersed or decentralized as required.

In the case of Kubernetes, the nodes encapsulate pods. These are another abstraction that manages one or more containers and allows them to share resources. As the container is a resource-independent abstraction, at some point the microservice applications it hosts must access the hardware, and this is achieved through pods.

A node hosts one or more pods. Each pod allows its containers to share storage and networking, is allocated its own IP address, and contains the information about how to run each container, including which container image version to use and which specific ports are required.
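A hedged sketch of that per-container information, again using the Kubernetes Python client: the deployment below pins a hypothetical container image version and declares the port the microservice requires, and the control plane uses this specification to create and run the pods.

```python
# A minimal sketch of the information a pod carries about each container:
# which image version to run and which ports it needs. The image name,
# labels, and port are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="transcode",
    image="registry.example.com/transcode:1.4.2",   # pinned image version
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="transcode"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "transcode"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "transcode"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```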

Resilience

It is possible to run an entire cluster on a single machine or VM, but this would be highly risky: should that server fail, the whole architecture fails with it, and it would be time-consuming and costly to rebuild. Instead, a minimum resilient deployment operates over three nodes (virtual or physical servers), so that the control plane and the cluster's state database are replicated across them and a working majority remains if any one node is lost. Therefore, if one VM instance or physical server dies, the others recover the microservice architecture.

Another aspect of resilience is that we should assume failures will happen. This assumption shapes strategies for testing as well as recovery; it is only when recovery from a failure can be achieved that a system is truly resilient. The compact and contained nature in which microservices operate lends itself to this methodology. Instead of fearing failures, we should embrace them and use them to learn how to recover. The old attitudes to A-B failover just don't cut it in the highly complex world of software infrastructures.
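At the application level, designing with failure in mind can be as simple as never assuming a call to a peer microservice will succeed. The sketch below retries a hypothetical endpoint with an increasing backoff and, if it still fails, raises the error so the orchestration layer can act on it.

```python
# A minimal sketch of designing for failure: retry a peer-service call with
# backoff rather than assuming the network and the peer are always available.
import time
import urllib.error
import urllib.request

def call_with_retries(url: str, attempts: int = 3, backoff_seconds: float = 1.0) -> bytes:
    """Try a request several times, doubling the wait after each failure."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise                      # surface the failure to the orchestration layer
            time.sleep(backoff_seconds * (2 ** attempt))

if __name__ == "__main__":
    print(call_with_retries("http://playout-service:8080/status"))
```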

In essence, to build a truly resilient system, microservice architectures should be designed with failure in mind. Only when we know what happens when things don't go according to plan can we devise effective counter-measures. It is the combination of the natural independence of microservices and orchestrated, containerized infrastructure that makes these systems truly resilient.
