Scalable Dynamic Software For Broadcasters: Part 2 - Microservices Explained

Monolithic code has been the traditional method of writing and distributing software for many years. As systems become more complex, it proves difficult to scale, maintain, and support. Microservices provide the solution to these challenges, but to understand why they are so important, we must first review monolithic software and its limitations.

Monolithic software is defined as a stand-alone application that encapsulates the user interface, the data processing, and the management of data storage. There are many examples of monolithic software, from word processors to finance applications, many of which process workflows in their entirety. And this is where the challenges appear.

Designing for peak demand isn’t just a challenge for broadcasters; many other industries suffer from it too. For example, customers accessing their bank accounts tend to do so within the waking day, and few need access overnight. However, banks have traditionally had to design their software for peak daytime demand, and they achieved this by building bigger and bigger monolithic software applications. It’s fair to say this practice has either stopped or is stopping, as banks are also transferring to microservice-type systems that scale to meet demand.

If we consider a monolithic software application in terms of a word processor running on a stand-alone PC, then monolithic designs are not too much of a challenge. Yes, the software probably sits doing nothing outside of the working day, but for most businesses this isn’t a major issue. However, if we think in terms of business logic and workflows, then life becomes much more interesting.

In the example of a bank, new features are constantly being added to improve the client experience. This not only results in more code functionality, but also has an impact on the workflow and business logic, which in turn greatly affects the software.

Scalability

Scaling monolithic software is a major challenge because the functionality is difficult to split. Take a transcoding service as an example, where one transcoder runs on one server: it’s relatively easy to operate the service manually. A user drags the media file into the transcoder’s input directory; the transcoding application scans the directory for new files and, when it detects one, processes it and writes the transcoded file to the output directory.
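As a rough illustration of this watch-folder pattern, the minimal Python sketch below polls a hypothetical input directory and hands each new file to ffmpeg (assumed to be installed); the directory names and output format are only examples, not any particular vendor’s implementation.

```python
import subprocess
import time
from pathlib import Path

INPUT_DIR = Path("/media/transcode/input")    # hypothetical watch folder
OUTPUT_DIR = Path("/media/transcode/output")  # hypothetical output folder

def transcode(src: Path, dst: Path) -> None:
    # Hand the file to ffmpeg for a single H.264/AAC output.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx264", "-c:a", "aac", str(dst)],
        check=True,
    )

def watch_folder() -> None:
    seen = set()
    while True:
        for src in INPUT_DIR.glob("*.mxf"):   # scan for new media files
            if src not in seen:
                seen.add(src)
                transcode(src, OUTPUT_DIR / (src.stem + ".mp4"))
        time.sleep(5)                          # poll every few seconds

if __name__ == "__main__":
    watch_folder()
```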

If ten files were added to the input queue, the transcoder app would process each media file in sequence. To speed up the process, we would need to increase the number of apps running on the server. This is fine, and we may be able to add another two or three instances of the transcoding app to give a processing capacity of four simultaneous media files. But if forty media files need processing simultaneously, we are back to square one: each transcoder has to work through ten files sequentially, greatly increasing the processing time for all of them. A file may also need to be processed multiple times, once for each delivery format, especially when we start providing OTT streams.
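The bottleneck is easy to see if we model the same server with a fixed number of transcoder instances. The sketch below is purely illustrative: forty queued files shared between four workers still means each worker grinding through ten files in sequence.

```python
import queue
import threading

NUM_TRANSCODERS = 4                      # instances that fit on one server
jobs = queue.Queue()

for n in range(40):                      # Saturday-night backlog of forty files
    jobs.put(f"programme_{n:02d}.mxf")

def worker(worker_id: int) -> None:
    while True:
        try:
            media_file = jobs.get_nowait()
        except queue.Empty:
            return
        # Each instance still works through its share of ten files sequentially.
        print(f"transcoder {worker_id} processing {media_file}")
        jobs.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_TRANSCODERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```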

For most of the time, the broadcast facility may only need to process three or four media files at once. However, broadcast demand is incredibly peaky because viewers tend to be habitual in their viewing; Saturday night, for example, usually exhibits much higher demand than any other time. In this example, the broadcaster would need to provide forty or more transcoded files at a time, and at one transcode application per output file, that means forty or more transcoding apps.

Although this is technically possible, when Sunday arrives and only two transcoding apps are required, we suddenly have thirty or more transcoding apps, with their associated hardware, all sat around doing nothing except chewing through valuable capex. Although the transcoding apps may be individual applications, as an integrated workflow they represent a monolithic software system because the broadcaster must design for the peak demand of a Saturday night. Furthermore, the transcoding apps will be locked to specific servers due to licensing requirements and the servers’ resource restrictions. And if we have multiple servers, then we need a method of scheduling the transcoding jobs, keeping track of them, and delivering the correct transcoded files.
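To make the scheduling point concrete, a very small sketch of the kind of bookkeeping this implies is shown below; the server names and the least-loaded placement rule are hypothetical, not a description of any particular product.

```python
from collections import defaultdict

servers = ["server-a", "server-b", "server-c", "server-d"]   # hypothetical hosts
assignments = defaultdict(list)                               # track which job went where

def schedule(job: str) -> str:
    # Send each job to whichever server currently has the fewest jobs queued.
    target = min(servers, key=lambda s: len(assignments[s]))
    assignments[target].append(job)
    return target

for n in range(10):
    job = f"tx_job_{n:02d}.mxf"
    print(job, "->", schedule(job))
```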

The transcoding example not only represents a monolithic software system but also demonstrates a lack of flexibility. Imagine what would happen if one server hosting two transcoding apps out of eight developed a fault: we would suddenly lose 25% of the processing capacity. We could spin up a spare server, but that would take time to configure and to license with the transcoding app keys from the faulty server. Using an array of virtual machines would provide an element of resilience, but there are still operational challenges in doing this.

Figure 1 – Virtual machines and containers differ in how they integrate with the hardware and their operating systems. Also, containers managing microservices are considered much more lightweight and versatile than VMs, making them easier to work with and to host simultaneously on public clouds and in private datacenters.

Microservices solve these challenges as they not only abstract the software from any specific hardware but, when combined with orchestration, also form a complete ecosystem for managing system functionality. Microservices break applications into modular components that can be distributed simultaneously over the public cloud and on-prem datacenters. From the user’s perspective, they neither know nor care where the physical application resides, and this is one of the major benefits of microservices. Also, when combined with containerization such as Docker and orchestration such as Kubernetes, a completely resilient system becomes available, capable of self-healing should the hardware fail and of creating new services when peak demand needs to be met.
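With an orchestrator in place, scaling up or down becomes a declarative request rather than a hardware exercise. The sketch below is only an illustration: it assumes a Kubernetes cluster with a deployment named transcoder and simply shells out to the standard kubectl CLI to change the replica count.

```python
import subprocess

def scale_transcoders(replicas: int) -> None:
    # Ask the orchestrator for the desired number of transcoder instances.
    # Kubernetes then starts or removes containers to match, and will
    # recreate any instance whose hardware fails (self-healing).
    subprocess.run(
        ["kubectl", "scale", "deployment/transcoder", f"--replicas={replicas}"],
        check=True,
    )

scale_transcoders(40)   # Saturday evening: meet peak demand
scale_transcoders(2)    # Sunday morning: shrink back to the everyday baseline
```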

Hybrid On-Prem And Public Cloud

In the broadcast transcoder example, the facility may spend 80% of its time requiring only two transcoding apps and, for the remaining 20%, may need eight transcoding apps to meet peak demand. Private on-prem datacenters often deliver optimal performance and cost when the services they run do not vary too much from the average. Yes, we could design an on-prem datacenter to meet peak demand, but as we’ve seen, this is often a costly and inefficient exercise, mainly due to the additional investment in hardware and support costs. A better solution is to provide on-prem datacenter resources for the average demand and public or private cloud capacity for the peak. And this can be easily facilitated by containers and microservices.

For 80% of the time, the microservices would reside only in the on-prem datacenter, and when more resources are required to meet the demands of Saturday evening, more transcoder apps can be created in the public cloud within the containerization system. When Sunday morning arrives, the cloud services can simply be deleted. This is a far more efficient method of working as the peak demand does not have to be built into the on-prem datacenter.
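One simple way to express that 80/20 split is a placement rule that keeps the baseline on-prem and bursts only the excess into the public cloud. The capacity figure and function below are hypothetical, purely to show the shape of the decision.

```python
ON_PREM_CAPACITY = 2   # transcoder instances the on-prem datacenter carries all week

def place_transcoders(demand: int) -> dict:
    """Split the required transcoder instances between on-prem and public cloud."""
    on_prem = min(demand, ON_PREM_CAPACITY)
    cloud = max(demand - ON_PREM_CAPACITY, 0)   # burst capacity, deleted when idle
    return {"on_prem": on_prem, "public_cloud": cloud}

print(place_transcoders(2))   # quiet weekday: everything stays on-prem
print(place_transcoders(8))   # Saturday evening: six extra instances in the cloud
```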

It will soon be possible to run large parts of what has hitherto been physical infrastructure in the public cloud. Some broadcasters are eager to embrace this approach, while others are more comfortable with a private cloud, i.e. compute power only they can access, for a variety of tasks. The power of microservices is that they give broadcasters the choice of how they use their available infrastructure.

In the next article in this series, we will dig deeper into orchestration, containers, and the benefits they deliver for microservices.
