Microservices For Broadcasters - Part 1

Computer systems are driving forward broadcast innovation and the introduction of microservices is having a major impact on the way we think about software. This not only delivers improved productivity through more efficient workflow solutions for broadcasters, but also helps vendors to work more effectively to further improve the broadcaster experience.



This article was first published as part of Essential Guide: Microservices For Broadcasters.

One of the great advantages of moving to COTS systems and IT infrastructures is that we can benefit from developments in seemingly unrelated industries. Microservices have gained an impressive following in enterprise application development and many of the design methodologies transfer directly to broadcast infrastructures.

Scaling broadcast facilities has long been the goal of many system designers. Television, by its very nature, has times of peak demand when viewing audiences gather to watch high-value programs such as Saturday night entertainment or prominent sports events. Traditionally, broadcasters would need to design their systems for the highest peak-demand events. This was often difficult to accomplish due to the massive number of unknown variables in the system, leading to significantly increased costs and complexities.

Microservices are distributed software modules and, combined with virtual machine infrastructures, can easily scale to deliver on-demand services that meet peak requirements. Furthermore, due to the functional nature of microservices, new processing systems can be developed independently of the rest of the software. This allows specialist agile teams to safely develop specific functionality, such as adding the Rec.2020 color space to an existing video processing component.

Agile methodologies have made significant inroads into all areas of software development, and microservices benefit greatly from the agile philosophy. They encourage and deliver software based on currently relevant functionality, as opposed to preconceived ideas that may be years out of date. Agile abandons the waterfall method of project management, further empowering software teams to change quickly to meet the varying and increasing requirements of broadcasters. New components can be developed and deployed quickly and safely, as the modular nature of microservices encourages deep and efficient software testing.

Although many broadcast infrastructures share common vendors in their studio, edit and playout designs, they vary greatly in their workflow implementations, leading to significant variations in broadcaster requirements. With traditional hardware and software solutions, broadcasters would often have to compromise on their workflow requirements, as the bespoke code needed to support them resulted in complex and difficult-to-manage systems. Microservices go a long way towards rectifying this, as the agile and distributed nature of their design means software interfaces can be written with greater ease and significantly reduced risk.

Simplification is key to flexibility and scalability, and microservices definitely deliver this. Instead of maintaining a huge monolithic code base with variations to meet the specific needs of clients, microservices promote a generalized core code base that can be easily tested and verified. RESTful APIs with loosely coupled interfaces further improve the broadcaster experience, as modifications and bespoke additions can be accommodated relatively easily.

To fully appreciate the advantages of microservices, it helps to understand how software teams have worked in the past, how software was built, and the associated risk of monolithic design. This Essential Guide explains the challenges faced by software teams using waterfall project management when delivering monolithic code and then goes on to discuss and describe how agile development and microservices deliver unprecedented flexibility and scalability for broadcasters.

Microservices deliver huge benefits and are the future of software provision for any broadcaster. These articles will help you understand why.

Tony Orme
Editor, The Broadcast Bridge 


Software continues to dominate broadcast infrastructures. Control, signal distribution, and monitoring are driving software adoption, and one of the major advantages of moving to computer systems is that we can ride the crest of the wave of IT innovation. In this Essential Guide, we investigate microservices to understand them and gain a greater appreciation of their applications in broadcasting.

Understanding the benefits of microservices to end users and broadcasters requires some background knowledge of earlier software development, the philosophy of design, and how developers actually tackle solutions.

Traditional software architectures were monolithic in design. That is, there was one big homogeneous executable that provided the full end-to-end user experience. It would accept inputs through the user interface, access data through some sort of database, accept information through input/output interfaces, process the data, and provide the user response.
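As a minimal sketch of this structure (the function names and data are illustrative, not taken from any real product), a monolithic application keeps the user interface, data access and processing in one executable and one address space:

// Hypothetical monolithic application: UI handling, data access and
// processing are all compiled into a single executable.
#include <iostream>
#include <map>
#include <string>

// Stand-in for the database layer, living in the same binary.
std::map<std::string, std::string> g_assetDb = {
    {"clip42", "1080i59.94, Rec.709"}
};

std::string processRequest(const std::string& assetId) {
    auto it = g_assetDb.find(assetId);                   // data access
    if (it == g_assetDb.end()) return "asset not found";
    return "asset " + assetId + ": " + it->second;       // processing
}

int main() {
    std::string assetId;
    std::cout << "asset id: ";                           // user interface input
    std::cin >> assetId;
    std::cout << processRequest(assetId) << "\n";        // user response
    return 0;
}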

Monolithic Code

A complete program was divided into many source files. Using a compiler, each source file was processed in turn, and the results were combined into a single executable file that could be run by the computer's host operating system.

Diagram 1 – For monolithic code, multiple files are compiled into object files and then linked with external libraries to provide a single executable file. Each developer may work on one or more source files simultaneously and, as monolithic code is tightly coupled, they must make sure their functional interface designs and data formats are exactly the same. This can be the source of bugs, and of compiler and linker issues, due to the ripple effect.

To make development easier, monolithic code can be modularized through the use of libraries. One example of this is code written in C or C++ using static libraries. The code is divided into multiple functional units that are compiled into object code, an intermediate machine-level representation that is hardware and operating system dependent but contains symbolic labels instead of final memory addresses. The object files are then joined by the linker, which resolves the label addresses, resulting in a single executable program.
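As a simplified illustration of this flow, the hypothetical video_Proc() function used later in this article could be provided by a static library. The file names, parameters and build commands below are illustrative assumptions based on a typical GCC toolchain:

// video_lib.h - interface to a hypothetical video processing library.
#pragma once
int video_Proc(int width, int height, int frameRate);

// video_lib.cpp - compiled to an object file, then archived into a static library.
#include "video_lib.h"
int video_Proc(int width, int height, int frameRate) {
    return width * height * frameRate;   // placeholder "processing"
}

// main.cpp - the application that calls the library function.
#include <iostream>
#include "video_lib.h"
int main() {
    std::cout << video_Proc(1920, 1080, 50) << "\n";
    return 0;
}

// Typical build steps:
//   g++ -c video_lib.cpp -o video_lib.o   (compile to object code with symbolic labels)
//   ar rcs libvideo.a video_lib.o         (archive the object file into a static library)
//   g++ main.cpp -L. -lvideo -o app       (linker resolves video_Proc's address into one executable)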

A development of this system used dynamic libraries, of which two types are available: those dynamically linked at run-time, and those dynamically loaded and unloaded during execution. For dynamically linked programs, the libraries must be available during the compile and link phases, but they are not included in the distributed executable. Dynamically loaded and unloaded programs use a loader system function to access the libraries at execution time.
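A minimal sketch of the second approach on a POSIX system is shown below. It assumes a hypothetical shared library, libvideo.so, that exports video_Proc() with C linkage (extern "C") so the symbol name is not mangled; the exact mechanism differs between operating systems:

// Dynamic loading: the library is opened at execution time with dlopen()
// and the required symbol is resolved with dlsym().
#include <dlfcn.h>
#include <iostream>

int main() {
    void* handle = dlopen("./libvideo.so", RTLD_LAZY);   // load at run-time
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << "\n";
        return 1;
    }

    // Resolve the video_Proc symbol and cast it to the expected signature.
    using VideoProcFn = int (*)(int, int, int);
    auto videoProc = reinterpret_cast<VideoProcFn>(dlsym(handle, "video_Proc"));
    if (!videoProc) {
        std::cerr << "dlsym failed: " << dlerror() << "\n";
        dlclose(handle);
        return 1;
    }

    std::cout << videoProc(1920, 1080, 50) << "\n";
    dlclose(handle);   // unload the library when it is no longer needed
    return 0;
}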

Modularity has always been a key requirement for developers as it promotes code reuse, improving efficiency and reducing the possibility of bugs creeping in. If you already have a library that provides low-level access to the Ethernet port, for example, then why bother re-writing it?

Although the library approach makes monolithic code modular, it still suffers from some severe restrictions.

Flexibility Demands

Providing reliable, efficient and flexible code is the goal for any vendor or software developer. The term “flexible” is key to understanding the limitations of traditional monolithic code development.

Developers in software teams building monolithic code cannot work in isolation. Although it may be possible to break the code into functional units to allow parallel development cycles, the functions remain tightly coupled. That is, the interface design for each function must be well defined before coding can start. As functional requirements change, the interfaces must change across the whole design. This has consequences for the rest of the team, and changing interface or data specifications in a monolithic design results in the ripple effect.

Developers may be working on many different parts of the code base at the same time. Systems such as unit testing do exist to allow a developer to independently test the function they are working on. But every so often, the whole team must stop, and a complete re-compile of the software is executed so that the entire program can be tested again.

Increasing Monolithic Complexity

Unit testing is the process of applying known test data to a function, or group of functions, and confirming the result matches a known expected value. This is all well and good, but the complexity of testing increases exponentially as the variety of data being tested increases. Consequently, it's almost impossible to test every unit in isolation and expect the whole system to work. At some point, the whole code base must be re-compiled and tested.
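A minimal unit test sketch is shown below, using a plain assertion rather than a full test framework such as GoogleTest or Catch2, and assuming the hypothetical video_Proc() from the earlier library example:

// Apply known test data to video_Proc() and confirm the result matches
// the expected value; an assertion failure flags a defect in the unit.
#include <cassert>
#include "video_lib.h"   // hypothetical header declaring video_Proc()

void test_video_Proc_known_result() {
    assert(video_Proc(1920, 1080, 50) == 1920 * 1080 * 50);
}

int main() {
    test_video_Proc_known_result();
    return 0;   // reaching this point means the unit test passed
}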

One of the major challenges of compiling monolithic code is establishing that all the software interfaces still work and are correct after any changes. For example, suppose we modify a function called video_Proc(): in a previous version of the code, three parameters may have been passed to the function, but in the new version there may be four. As monolithic code uses functions that are tightly coupled, every function calling video_Proc() will need to have its call updated.
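A sketch of that ripple effect, with illustrative parameter names and an assumed color-space argument:

// Before: video_Proc() took three parameters.
//   int video_Proc(int width, int height, int frameRate);

// After: a fourth parameter has been added.
int video_Proc(int width, int height, int frameRate, int colorSpace) {
    return width * height * frameRate + colorSpace;      // placeholder body
}

// Because the monolith is tightly coupled, every call site must be found
// and updated before the program will build again.
void buildOutput() {
    // video_Proc(1920, 1080, 50);                       // old call: no longer compiles
    video_Proc(1920, 1080, 50, /*Rec.2020*/ 2020);       // updated call
}

int main() {
    buildOutput();
    return 0;
}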

Diagram 2 – Unit testing is part of a full testing strategy and allows individual components to be tested so that they can be validated to confirm they work in accordance with the design. Integration testing combines related components and functions to test for defects in the system (ripple effects will be seen here). System testing checks for compliance against the specified requirements of the software as a whole. Acceptance testing confirms the software works as the client expects.

It may take some time to work through the whole code base to find all references to this function, change the number of parameters passed to it, and recompile. Even with modest-sized software teams, the code base can soon expand into tens or hundreds of thousands of lines of code. Solutions such as polymorphism exist to overcome this, but such object-oriented design philosophies soon become complex and difficult to manage, and have their own challenges.

This complexity is undesirable and leads to slow release times and difficult-to-manage code. Furthermore, it makes it very difficult to meet the unique and specific demands of individual clients.

In an ideal world, a vendor would be able to provide a single version of code for every one of their clients. With small applications such as phone apps this is possible. However, no two broadcast facilities are the same and workflows generally differ, even if the same infrastructure components and vendors are used. Localization, in the form of local working practices and transmission formats, conspires to create individually complex broadcast systems. Consequently, vendors must provide flexibility in their code design to facilitate client requirements.

Hardware Development Is Slow

We rely on specifications such as SMPTE's ST 2110 to provide common signal distribution for video and audio over IP. These standards are often years in the planning and are generally static once released. They do get updated occasionally, but they are almost always backwards compatible. New releases are relatively infrequent as they often result in hardware changes that can take many months to implement. However, users and clients have become used to much shorter development cycles for software-based products and are usually not willing to wait years for a solution.

Another consequence of the much shorter design cycle is that vendors tend to design their own data exchange and control interfaces and simply do not have the time to engage in committee meetings to agree on the next MAM interface standard. And even if they did, the rapid development of current software technology means standards such as these would most probably be out of date even before they were published. Therefore, the software must be flexible enough to interface to any other system.

This is possible in monolithic designs, and a great deal of flexibility has been provided in recent years. However, the challenges for the development teams increase exponentially. To facilitate different control interfaces unique to specific clients, the software teams must continue to support the modules associated with each client indefinitely.

Scalability Requirements

Another challenge monolithic code presents is that of scalability. One of the key advantages cloud computing provides, whether public or private, is the ability to scale resources as and when we need them. As more user demand is placed on the code, the underlying resources supporting it must also be increased.

Monolithic code can, to a certain extent, scale to meet increased user demand. This is achieved by increasing the number of instances of the code running behind a device such as a load balancer. The load balancer can detect the number of user requests and, when they pass a certain threshold, spin up new instances of the code. This is how traditional web servers worked using solutions such as Apache. However, monolithic code cannot scale to meet the demands of increasing data volume, as each instance of the program will need access to all of the data. This potentially makes memory management and caching inefficient and can lead to contended I/O access.
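A highly simplified sketch of that threshold logic is shown below; the per-instance capacity, executable name and use of std::system() are all illustrative stand-ins for a real load balancer and orchestration API:

// When the request rate exceeds what the running instances can serve,
// spin up another copy of the whole (monolithic) executable.
#include <cstdlib>
#include <iostream>

const int kRequestsPerInstance = 500;   // assumed capacity of one instance

void scaleInstances(int requestsPerSecond, int& runningInstances) {
    int required = requestsPerSecond / kRequestsPerInstance + 1;
    while (runningInstances < required) {
        // A real system would call a hypervisor or orchestration API here.
        std::system("./monolith_app &");
        ++runningInstances;
        std::cout << "spun up instance " << runningInstances << "\n";
    }
}

int main() {
    int instances = 1;
    scaleInstances(1800, instances);    // e.g. 1,800 requests/s -> 4 instances
    return 0;
}

Note that each new instance is a complete copy of the entire program, which is exactly why this approach scales poorly against growing data volumes.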

Also, different functions within the program may have different resource requirements. For example, a video compression function may be CPU intensive, whereas a video processing function may be GPU intensive. Monolithic code does not allow us to easily split the program into functional components, so it cannot be scaled at this level of granularity, and efficiency suffers as a result.
