Scalable Dynamic Software For Broadcasters: Part 5 - Operating Systems And Containers
Modern software development is highly reliant on cross-functional software teams. Maintaining a common operating system with dependencies that meet the needs of every developer is almost impossible, but containers go a long way toward solving these challenges.
All 12 articles in this series are available in our free 88 page eBook ‘Scalable Dynamic Software For Broadcasters’.
In an ideal world, a software engineer would be able to write their code against generic libraries that were the same across multiple platforms. However, the fast-paced world of software development, with its agile methodologies, can never reach this ideal. New libraries and dependencies are updated and released all the time, leading to continuous change, and this is especially evident given the current focus on rapid development and feature release.
A software dependency can significantly affect the reliability of code because of its potential to change. There are two types of dependency: those we control, and those we do not. When writing code, the developer will choose a set of libraries that they know are up to date and perform the required task. What they cannot control is somebody else updating one of those libraries while they are writing their code, so that a new version of the library is released after their own deployment. This can have a devastating effect on the reliability of the product feature, as the new library version could introduce a function that causes unintended side effects in the developer’s code – or more succinctly, makes it unreliable.
Diagram 1 – A) If the luma gain library is updated, the Proc Amp and Transcoder applications have no knowledge of this and will load the new LumaGain library the next time they are executed, which may introduce a bug. B) Using microservices, the LumaGain libraries are ring-fenced: if the library changes, the versions used by the Proc Amp and Transcoder do not, resulting in greater stability.
Even libraries that are written in-house, that is, those a developer has control of, can have the same sort of impact. Again, if a library is updated and an unintended side effect conflicts with the code providing a specific function, the whole program could be made unstable. There is an argument that this is where testing and QA (Quality Assurance) play a massive part: if the new library were tested with every software function that used it, then any errors would be found. To a certain extent this is correct; however, it requires meticulous record keeping, which is often not achievable, especially when developers are dispersed throughout the world.
To overcome this, containers provide a ring-fenced environment for developers to write their code using their own specific dependency versions. These versions stay with the container and form part of the operating environment for the microservice. This helps reliability enormously as the microservice can be tested with specific libraries that will not automatically change. If there are any changes in the underlying operating system or containerization software, they would be so significant that they would not happen automatically, and would instead be subject to a rigorous change control process within the facility, with every software function tested exhaustively before a new release of the operating system.
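To illustrate the ring-fencing described above, here is a minimal, hypothetical Dockerfile sketch for a transcoder microservice. The image name, binary name, and package version shown are illustrative assumptions, not a real deployment; the point is that every dependency is pinned to an exact, tested version at build time, so the resulting image never changes until it is deliberately rebuilt:

```dockerfile
# Hypothetical sketch: dependencies are pinned to exact versions
# inside the image, so a system-wide library update elsewhere
# cannot alter what this microservice runs against.
FROM ubuntu:22.04

# Pin the system library at build time (version string is illustrative).
RUN apt-get update && apt-get install -y --no-install-recommends \
    ffmpeg=7:4.4.2-0ubuntu0.22.04.1 \
 && rm -rf /var/lib/apt/lists/*

# The service binary and its vendored libraries travel with the image.
COPY transcoder /usr/local/bin/transcoder
ENTRYPOINT ["/usr/local/bin/transcoder"]
```

Rebuilding this image, rather than patching a running host, is what puts a library change under the strict controls described in the next paragraph.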
This doesn’t mean the container deployment is static, far from it. Libraries and dependencies can be changed, but only under strict controls. The developer making the change can test the new library deployment within the tightly controlled environment of their development bench, and once it has been proven to work, release it into the containerization/microservices pool.
This demonstrates another beneficial side effect of microservices: any change in one microservice has a limited impact on the rest of the system. For example, if during development a transcoder microservice develops a bug, then the ramifications of that bug are limited to the process it is running in. Contrast this with a monolithic design, where one bug can have a catastrophic impact on the rest of the system, and microservices certainly demonstrate their resilience.
Host OS
Containerized microservices can only run on the operating system they were designed for. For example, a transcoder microservice developed using the Linux operating system will only work in a containerized Linux environment. And a transcoder developed for Windows will only work in a containerized Windows environment. In other words, Linux containers cannot run on a Windows host and Windows containers cannot run on a Linux host.
It is possible to create a VM with a Linux operating system on a Windows server, allowing a Linux container to run on the VM (with Linux as its OS). This clearly adds considerable overhead and potential latency to any microservice running in such an environment, but the exercise demonstrates the versatility of containers and their portability across hardware and virtualized platforms.
Although keeping container and microservice operating systems aligned implies a limitation on the portability of containers, the microservice architecture can still be thought of as cross-platform. This is because the APIs through which the containers and microservices are controlled and accessed are platform-agnostic. The APIs run on top of the HTTPS/TCP/IP protocols, which work equally well on Windows, Linux, and MacOS platforms, thus making the operation of the containerized microservice platform independent. Furthermore, the operating system only needs to be a generic version, and any specific libraries needed for the microservice can be kept within the microservice, so that the container stays lightweight and easy to deploy.
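To make the platform-agnostic API idea concrete, here is a minimal Python sketch, using only the standard library, of a client querying a microservice’s status endpoint over HTTP. The service name, endpoint path, and JSON fields are hypothetical; a tiny in-process server stands in for the real microservice so the example is self-contained, and identical client code would run unchanged on Windows, Linux, or MacOS:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StatusHandler(BaseHTTPRequestHandler):
    """Stand-in for a containerized microservice's HTTP control API."""
    def do_GET(self):
        body = json.dumps({"service": "transcoder", "status": "running"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: plain HTTP + JSON, identical on any host OS.
with urlopen(f"http://127.0.0.1:{port}/status") as resp:
    status = json.loads(resp.read())
server.shutdown()

print(status["status"])  # running
```

Because the control surface is just HTTP and JSON, the caller never needs to know which operating system the container itself requires.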
Why Use Dependencies And Shared Libraries?
Dependencies generally fall into three categories: libraries loaded when the program starts, libraries loaded dynamically as the program runs, and application-specific data. All three have the potential to wreak havoc in a software application, but their history, and why we use them, lies in understanding some of the intricacies of monolithic software.
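The three categories can be sketched in a few lines of Python; the file names and values here are illustrative only:

```python
import json       # 1) load-time dependency: must exist before anything runs
import importlib
import os
import tempfile

# 1) Load-time library: 'json' above is resolved when the program starts.
config = json.loads('{"codec": "h264"}')

# 2) Run-time library: loaded only when this code path actually executes.
math_lib = importlib.import_module("math")
root = math_lib.sqrt(16)

# 3) Application-specific data: a file the program reads as it runs.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{"preset": "broadcast"}')
    path = f.name
with open(path) as f:
    data = json.load(f)
os.remove(path)
```

A change to any of the three – the load-time library, the dynamically loaded one, or the data file – can alter the program’s behaviour without a single line of its own code changing.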
As more features were required, monolithic software would expand to the point where feature bloat resulted in massive programs. This had three main problems: first, the code became physically large, consuming many gigabytes of storage and making it difficult to deploy and manage; second, the time taken to compile the code grew significantly; and third, team collaboration became more and more challenging.
One of the solutions to reducing the software storage footprint, moderating compilation times, and promoting developer collaboration was to use libraries. These are specific programs that are not run directly by users but are available to developers writing code. They would be compiled to object code (the compilation stage that usually takes the most time) and then made available from a shared resource to all developers, who would only need to compile the code they were writing and link the precompiled libraries in the final stage of compilation and testing.
Libraries, often referred to as shared libraries, form a critical part of the development cycle. However, they suffer one serious limitation: they sometimes change. This might be to fix a bug or implement another feature, but if a developer has designed against an earlier version, there is no guarantee that their code will continue to work. As shared libraries are available to all applications within an operating system, changing one has potential ramifications for every program that uses it, and this results in instability.
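One common convention for reasoning about whether a library update is safe is semantic versioning, where the major number signals breaking changes. The short sketch below, a simplified assumption rather than a full implementation of the versioning specification, shows why an uncontrolled update can silently invalidate code built against an earlier version:

```python
def parse(version: str) -> tuple[int, int, int]:
    """Split a 'major.minor.patch' string into comparable integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def compatible(pinned: str, installed: str) -> bool:
    """Simplified semantic-versioning check: same major version, and the
    installed minor.patch is at least what the code was built against."""
    p, i = parse(pinned), parse(installed)
    return i[0] == p[0] and i[1:] >= p[1:]

print(compatible("1.4.2", "1.5.0"))  # True  - backwards-compatible update
print(compatible("1.4.2", "2.0.0"))  # False - breaking major change
print(compatible("1.4.2", "1.3.9"))  # False - older than the pinned version
```

The catch is that this check only helps if someone runs it; a shared library replaced in place on the host gives every dependent program the new version whether it is compatible or not.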
Microservices solve this by effectively taking a copy of each shared library they use and keeping it within the container. The microservice will only ever use the shared libraries within its own container, creating greater stability.
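This copy-and-keep approach is often called vendoring. The Python sketch below simulates it: a private copy of a (hypothetical) `lumagain` library is written into the service’s own directory and placed first on the import path, so a system-wide update of the same library can never change what this service loads:

```python
import os
import sys
import tempfile

# Simulate a vendored library: a private copy kept alongside the service.
# 'lumagain' and its contents are hypothetical, for illustration only.
vendor_dir = tempfile.mkdtemp(prefix="vendor_")
with open(os.path.join(vendor_dir, "lumagain.py"), "w") as f:
    f.write(
        "VERSION = '1.0.0'\n"
        "\n"
        "def apply_gain(y, gain):\n"
        "    return y * gain\n"
    )

# Put the vendored copy FIRST on the search path so it shadows any
# system-wide installation of a library with the same name.
sys.path.insert(0, vendor_dir)
import lumagain

print(lumagain.VERSION)             # 1.0.0
print(lumagain.apply_gain(0.5, 2))  # 1.0
```

Inside a container the same effect is achieved by baking the library into the image itself, so the version the service was tested with is the version it runs with.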