Empowering Cloud Through Microservices - Part 2
Achieving higher levels of scalability and flexibility demands a hybrid approach where microservices run in the cloud, both on- and off-prem, while common APIs hide the complexity of the infrastructure from the user to provide seamless operation.
Servers, switches, and storage devices all need to be connected, but the dynamic nature of IP networks means that we don’t have to constantly reconnect devices through circuit-switched topologies. Instead, packet-switched IP networks, when combined with SDNs (Software Defined Networks), provide more flexibility and resilience than could ever be achieved with traditional broadcast systems, by centralizing the control and management of network endpoints. When partnered with public cloud services, this further reduces the need to design for peak demand.
To free ourselves from peak demand design mindsets we must have scalable infrastructures. Building a datacenter does not necessarily free us from this restriction. It’s true that we may be able to take advantage of the statistical nature of the infrastructure needs as it’s unlikely that every studio will be running at the same time. However, broadcasters are expected to, and often mandated to, provide the best possible coverage for major public information events such as the election of a country’s leader. In such a case, the peak demand becomes a major issue.
Overcoming this limitation is achievable by combining scalable resources from public cloud services with on-prem datacenters. The beauty of microservice-type systems is that we are effectively abstracting the functionality away from the underlying hardware. From the perspective of the software, it doesn’t matter whether the microservice is running in an on-prem datacenter or a public cloud, as the platforms for both are similar and quite often the same. This is where the power of microservices manifests itself. A broadcaster can still design their on-prem datacenter for the average demand of their facility but plan the design so that additional scale and functionality can be provided in the public cloud on demand, such as during a major public information event.
Public cloud services are delivering untold opportunities and possibilities for broadcasters, but they do not lend themselves well to static designs. Where they excel, in terms of both technology and cost, is when they are scaled in dynamic systems. However, some broadcasters do not need this kind of flexibility, as their infrastructure requirement may not deviate much from the average for most of the time. In these circumstances, an on-prem datacenter is the best solution for their day-to-day needs. But when a major event happens, they need to be able to scale quickly and effectively, and this is where the public cloud excels.
Broadcasters with highly dynamic requirements would benefit from a purely cloud-based infrastructure. The great news is that, as microservices can run equally well in on-prem datacenters as in the public cloud, the whole infrastructure becomes very scalable, certainly well within the requirements of many, if not all, broadcasters.
Figure 1 – microservices can scale vertically and horizontally, or both simultaneously. That is, resources can be increased and allocated vertically, and simultaneously the number of services can be increased horizontally. The microservices are not tied to specific computer hardware and can scale across multiple devices.
When we speak of the public cloud it’s worth remembering that the datacenters do physically exist. Admittedly they’re often shrouded in an element of secrecy to maintain their security, but they are physical entities. This is important when planning the infrastructure as latency can be impacted by physical location, especially when considering human interface controls such as sound consoles and production switchers. Another reason for microservices that can run on hybrid infrastructures is to manage these timing issues.
As microservices are abstracted away from the hardware and use common interface control and data exchange, they can be moved between different datacenters. This provides a massive amount of customization as the microservices, with their container management systems, facilitate localization.
HTTPS And JSON
In part, microservices are flexible due to the control interface and data exchange methods they have adopted. They use the same technology that internet servers and web pages use, that is, HTTPS/TCP/IP and JSON data structures. Not only does this allow microservices to run on any cloud infrastructure, but it also opens the design to every software library and tool used for web page development, further expanding the opportunities for broadcasters in terms of available and proven technology.
HTTPS (Hypertext Transfer Protocol Secure) is used by web pages to exchange data with web servers. This protocol encourages a stateless method of operation, and by doing so, the software application focuses on processing a chunk of data and returning the results to the requester. Stateless transactions don’t need to reference earlier transactions to accomplish their task. Microservices are stateless, further allowing them to take advantage of the sender-receiver model. That is, a software app sends an instruction to the microservice, which then processes the data and sends it back. This is the same method a web browser uses when it requests a web page from a server. But key to this is how the data is exchanged and represented.
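The stateless sender-receiver model can be sketched as a plain function: each request carries everything the handler needs, and nothing is remembered between calls. This is a minimal illustration, not a vendor API; the field names (`job`, `params`, `status`) are assumptions.

```python
import json

def handle_request(request_json: str) -> str:
    """A stateless microservice handler: every call is self-contained.

    The handler reads the whole job from the request and returns the
    result; no state is retained between transactions, so any free
    instance of the service can process any request.
    """
    request = json.loads(request_json)
    # Hypothetical job: acknowledge the parameters with a status field.
    response = {
        "job": request["job"],
        "params": request["params"],
        "status": "complete",
    }
    return json.dumps(response)

# Two identical requests produce identical responses -- no hidden state.
msg = json.dumps({"job": "proc-amp", "params": {"luma_gain_db": 3}})
assert handle_request(msg) == handle_request(msg)
```

Because the handler holds no state, a load balancer can route each request to any idle instance, which is what makes horizontal scaling straightforward.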
This opens outstanding flexibility as the microservice is not only abstracted away from the hardware but is further abstracted from the physical storage locations. Assuming object storage is used then the reference to the media file is a globally unique identifier (GUID), which means the microservice isn’t restricted to a single server or domain.
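The GUID-based referencing can be illustrated with a minimal in-memory stand-in for object storage; `uuid4` here is only a placeholder for whatever identifier scheme the storage vendor actually uses.

```python
import uuid

# A minimal in-memory stand-in for object storage: media is addressed
# by a globally unique identifier rather than a file path, so the
# reference stays valid regardless of which server or domain holds it.
object_store = {}

def put_media(payload: bytes) -> str:
    """Store a media object and return its GUID."""
    guid = str(uuid.uuid4())
    object_store[guid] = payload
    return guid

def get_media(guid: str) -> bytes:
    """Retrieve a media object by GUID."""
    return object_store[guid]

guid = put_media(b"mezzanine-video-essence")
assert get_media(guid) == b"mezzanine-video-essence"
```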
Figure 2 – The user initiates a job at (1) and sends an HTTPS request through the API with an attached JSON data file containing the media file GUID. (2) The scheduler decides which ProcAmp microservice to send the request to (there could be many, depending on the workload). (3) The ProcAmp microservice accesses the media object storage using the GUID. (4) The object storage also provides the storage for the processed media file. When complete, the microservice sends a message to the scheduler, which in turn responds to the user through the API, referencing the processed media file’s GUID. The high-datarate media file is kept within the location of the high-speed object storage to improve efficiency and does not need to be streamed to the user.
For example, if a system needs the luminance gain of a media file increased, it sends an HTTPS message with a JSON file to the proc-amp microservice. The JSON file contains GUIDs for the source and destination media files as well as control parameters such as “increase luma gain by 3dB”. When the microservice has completed the operation, it sends a message back to the controller. And this is where the stateless part of the system comes into play: the controller issuing the proc-amp message does not know where the proc-amp microservice physically resides, as it could be on-prem or in a public cloud. The proc-amp load balancer keeps track of the number of proc-amp microservices available to it and sends the message to a free proc-amp microservice.
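As a sketch, the proc-amp request described above might look like the following JSON payload. The field names are assumptions for illustration; each vendor defines its own schema.

```python
import json
import uuid

# Hypothetical proc-amp job request: the field names are illustrative,
# not a vendor-defined schema.
request = {
    "service": "proc-amp",
    "source_guid": str(uuid.uuid4()),       # GUID of the input media object
    "destination_guid": str(uuid.uuid4()),  # GUID for the processed output
    "params": {"luma_gain_db": 3},          # i.e. "increase luma gain by 3dB"
}

# This is the body that would be POSTed over HTTPS to the scheduler's API;
# the media itself never travels with the request, only the GUIDs do.
body = json.dumps(request)
assert json.loads(body)["params"]["luma_gain_db"] == 3
```

Note that the payload carries only references and parameters; the high-datarate essence stays in object storage, exactly as Figure 2 shows.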
Interface Simplicity For Own Build
Although the underlying process may sound complicated, and it is, the good news is that all this is taken care of by the system management software, the containers that coordinate the microservices and their loads, and the microservices themselves. From the perspective of the user, the workflow is a series of HTTPS messages and JSON data files.
JSON files are incredibly powerful: they are human readable, so they can be edited with a text editor, and they facilitate fine-grained control over the microservice. It’s possible for an engineer or technologist to build their own workflows, as the JSON structures are well defined by the microservice vendor and therefore easy to operate and daisy-chain together.
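Daisy-chaining works because each job’s output GUID becomes the next job’s input GUID. A minimal sketch, with stub functions standing in for the real HTTPS calls to microservices:

```python
# Each stub stands in for an HTTPS call to a microservice; a real service
# would read and write object storage via the GUIDs instead of
# manipulating strings.
def proc_amp(source_guid: str) -> str:
    # Returns the GUID of the gain-corrected media object (hypothetical).
    return source_guid + "-gained"

def transcode(source_guid: str) -> str:
    # Returns the GUID of the transcoded media object (hypothetical).
    return source_guid + "-h264"

def run_workflow(source_guid: str, steps) -> str:
    """Daisy-chain jobs: each step's output GUID feeds the next step."""
    guid = source_guid
    for step in steps:
        guid = step(guid)
    return guid

result = run_workflow("media-001", [proc_amp, transcode])
assert result == "media-001-gained-h264"
```

Because every step uses the same request-and-GUID convention, reordering or extending the chain is just a matter of editing the step list.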
Another interesting aspect of this philosophy is logging. There are two main reasons why we want to log microservices: first, to understand how resources are being used and when; and second, to provide data to facilitate forensic analysis should anything go wrong.
Optimizing workflows is incredibly important for scalable systems, especially when using pay-as-you-go costing models. Knowing when and why microservice allocation scales both up and down helps broadcasters make best use of their infrastructures.
Although datacenters and public cloud services are incredibly reliable, things do occasionally go wrong and having a forensic audit trail helps understand how to avoid issues in the future. Humans also make mistakes and understanding how and why this occurs helps prevent incidents in the future.
One of the challenges we have with logging is knowing what to log and when. If every message is logged for every microservice then the database will soon fill up, so a compromise must be found.
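One common compromise is to let services emit every message but persist only the more severe ones. A minimal sketch using Python’s standard `logging` module; the logger name and thresholds are assumptions for illustration:

```python
import logging

# Records that survive the filter, standing in for a log database.
kept = []

class ListHandler(logging.Handler):
    """Persist only the records that pass this handler's level filter."""
    def emit(self, record):
        kept.append(record.getMessage())

logger = logging.getLogger("procamp.audit")  # hypothetical logger name
logger.setLevel(logging.DEBUG)      # the service emits everything...
logger.propagate = False            # keep the sketch self-contained
handler = ListHandler()
handler.setLevel(logging.WARNING)   # ...but only warnings+ are persisted
logger.addHandler(handler)

logger.debug("job 42 accepted")     # dropped by the handler threshold
logger.warning("job 42 retried")    # kept for forensic analysis

assert kept == ["job 42 retried"]
```

Raising or lowering the handler threshold is then the knob that trades forensic detail against database growth.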
Microservices are empowering broadcasters to think differently about their workflows and deliver scalable, flexible, and resilient infrastructures. The complexity of their operation is abstracted from users, allowing engineers to not only design systems but build their own adaptations to them when needed. When configured with the right mindset, microservices, on-prem datacenters, and public cloud services can work harmoniously together, scaling as needed to deliver truly flexible, scalable, and resilient workflows.