Scalable Dynamic Software For Broadcasters: Part 12 - Continuous Delivery
Traditional monolithic software applications were often difficult to maintain and upgrade. In part, this was due to the massive interdependencies within the code, which required the entire application to be upgraded and restarted, resulting in downtime that regularly created headaches for developers and users alike. Consequently, software upgrades and releases were delayed until they were absolutely necessary, creating an inherent fear of any change to the application.
Continuous delivery is a methodology that both addresses the challenges of monolithic software deployment and delivers many new advantages: reducing the risk of software releases, bringing new features to market much more quickly, maintaining and often improving quality, reducing costs, and providing a much better product for the user.
Container and microservice architectures facilitate continuous delivery for two main reasons: they are self-contained applications, and they have well defined interfaces to facilitate control and monitoring.
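As a minimal sketch of what such a well defined interface might look like, the short Python service below exposes monitoring and control-plane endpoints alongside its identity. The service name, endpoint paths and fields are illustrative assumptions, not taken from any specific product.

```python
# Minimal sketch of a self-contained microservice with a well defined
# interface for control and monitoring. Names and endpoints are
# illustrative assumptions, not a specific vendor API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE = {"name": "proc-amp", "version": "2.1", "accepts": "Rec.601"}

class ProcAmpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":          # monitoring endpoint
            body = {"status": "ok", **SERVICE}
        elif self.path == "/capabilities":  # control-plane discovery
            body = SERVICE
        else:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Each container runs exactly one such service on its own port.
    HTTPServer(("0.0.0.0", 8080), ProcAmpHandler).serve_forever()
```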
The self-contained nature of microservice applications not only helps with writing the code, but also with testing and integration. Monolithic applications do have many sub-programs running within them; however, the whole codebase generally had to be compiled together, which made testing challenging. Although unit testing allowed some sub-program testing, the test vectors were often complex and little was done to enforce inter-program communication and data exchange. Consequently, the whole application had to be tested prior to release, resulting in a highly complex process. As it was almost impossible to test every combination of signal and user input, new bugs were a regular occurrence. And if one part of the program failed, the entire application failed.
Localized Testing
Microservice architectures have taken the sub-program idea to a new level. They can be thought of as sub-programs, but only the microservices that have recently been updated need to be re-compiled. The enforcement of the API makes testing much easier, as the test vectors are defined as part of the design. For example, if a proc-amp can only work with Rec.601 format video, it only needs to be tested with Rec.601 test vectors. Furthermore, if the proc-amp microservice fails due to a bug, only that microservice fails; it certainly won’t take down the whole microservice architecture.
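The following hedged illustration shows how an API-defined input format narrows the test surface: if the contract says the proc-amp only accepts Rec.601 video, the test vectors only need to cover that format. The proc_amp() function and its fields are hypothetical stand-ins, not an actual proc-amp implementation.

```python
# Sketch of contract-driven testing: the API only admits Rec.601 input,
# so the test vectors only need to exercise that format.
# proc_amp() and its parameters are hypothetical stand-ins.
def proc_amp(frame: dict, gain: float = 1.0) -> dict:
    if frame["format"] != "Rec.601":
        raise ValueError("proc-amp only accepts Rec.601 video")
    return {**frame, "luma": min(frame["luma"] * gain, 1.0)}

def test_rec601_gain_applied():
    out = proc_amp({"format": "Rec.601", "luma": 0.5}, gain=1.5)
    assert abs(out["luma"] - 0.75) < 1e-9

def test_other_formats_rejected():
    try:
        proc_amp({"format": "Rec.709", "luma": 0.5})
        assert False, "Rec.709 input should be rejected"
    except ValueError:
        pass
```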
If the microservice were to crash, the workflow job associated with it would probably also fail, but the failure would be self-contained. The microservice would also be creating metadata that could be recorded by an intelligent monitoring system, allowing the developer or DevOps teams to identify the problem and fix it quickly.
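A minimal sketch of this kind of metadata emission is shown below: the microservice writes structured records that a monitoring system can collect. The field names, job identifiers and the report() helper are assumptions made for illustration.

```python
# Sketch of a microservice emitting structured metadata that an
# intelligent monitoring system can record. Field names, job IDs and
# the report() helper are illustrative assumptions.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

def report(event: str, job_id: str, **details) -> None:
    logging.info(json.dumps({
        "service": "proc-amp",
        "version": "2.1",
        "timestamp": time.time(),
        "job_id": job_id,
        "event": event,
        **details,
    }))

# A failed workflow job stays self-contained, but the record it leaves
# behind lets developers or DevOps pinpoint the fault quickly.
report("job_failed", job_id="wf-1042", reason="unsupported colour format")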
Containerization also helps enormously, as each microservice runs within a well-defined environment alongside its operating-system-specific libraries. One of the challenges of the monolithic design was library management, as libraries were upgraded and changed too. One part of the program might work with only one specific library version, whereas another part would work only with a different version. Managing this level of granularity within a single codebase is very difficult and often led to unpredictable incompatibility issues.
Library management is much easier in containerized systems, as each microservice’s operating-system-specific libraries are held within its container. It doesn’t matter if a proc-amp microservice runs a different version of the math.h library than the color corrector does. Holding them within the same container would be difficult, but as long as they are held in different containers, no conflicts arise.
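As a small Python analogue of this idea (using a numerical library and a version number purely as illustrative stand-ins for the article’s math.h example), each container can pin and verify its own dependency versions at start-up, independently of what any other container pins:

```python
# Sketch: each container pins and verifies its own library versions at
# start-up, so the proc-amp and the colour corrector can depend on
# different releases without conflict. Package and version are examples.
from importlib.metadata import PackageNotFoundError, version

EXPECTED = {"numpy": "1.26.4"}   # this container's pinned dependency

def verify_dependencies() -> None:
    for name, wanted in EXPECTED.items():
        try:
            found = version(name)
        except PackageNotFoundError:
            raise RuntimeError(f"{name} is not installed in this container")
        if found != wanted:
            raise RuntimeError(
                f"{name} {found} installed, container expects {wanted}")

if __name__ == "__main__":
    verify_dependencies()
    print("dependencies match the container's pinned versions")
```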
Figure 1 – Microservice applications lend themselves well to continuous delivery as they can be designed, tested, and deployed in isolation so that should a problem occur, their impact on the rest of the workflow and infrastructure is minimal.
Varying Versions
In the ideal world all libraries would be the same version throughout the whole codebase, but this is almost impossible to achieve. Therefore, allowing microservice applications to specify their own library versions in the knowledge that upgrading them won’t have any repercussions for other microservices is a massive win for software reliability.
Testing then becomes much more reliable, as the containers and microservices can be verified in isolation. If the math.h library needs to be upgraded for the proc-amp, the microservice in its container is tested independently of every other microservice. If the proc-amp microservice were to fail when deployed, the reason for the failure would be determined by the DevOps teams and the test vectors updated accordingly so that the problem doesn’t happen again.
Different versions of a microservice that provide the same service can be deployed as part of a controlled deployment strategy. For example, if the proc-amp were upgraded from version 2.1 to version 2.2 to fix a minor truncation error in the multiplier, version 2.1 would continue to be operational while the DevOps team tested version 2.2 on one workflow job. This allows version 2.2 to be tested for a short period before it is fully released, so if it did unexpectedly fail, its impact on the rest of the infrastructure would be well contained and negligible.
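A simple canary-style routing rule is one way such a controlled rollout could be expressed; the sketch below sends a small share of workflow jobs to the new version while the rest stay on the stable one. The service endpoints, job identifiers and the one-percent share are illustrative assumptions, not a prescribed mechanism.

```python
# Sketch of a canary-style rollout: version 2.1 keeps serving traffic
# while a small share of workflow jobs is routed to version 2.2 for
# evaluation. Endpoints and the routing rule are illustrative.
import random

STABLE = "http://proc-amp-v2-1:8080"
CANARY = "http://proc-amp-v2-2:8080"
CANARY_SHARE = 0.01   # roughly one job in a hundred exercises v2.2

def select_endpoint(job_id: str) -> str:
    # Deterministic per job: a job stays on whichever version it started on.
    random.seed(job_id)
    return CANARY if random.random() < CANARY_SHARE else STABLE

for job in ("wf-1042", "wf-1043", "wf-1044"):
    print(job, "->", select_endpoint(job))
```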
Reducing Risk and Maintaining Quality
The ability to manage and upgrade microservice functions in well contained and isolated systems greatly reduces the risk of failure for the rest of the system. The Continuous Delivery model promotes regular software deployment, but only in the context of limiting the impact on the rest of the architecture.
Deploying code regularly greatly reduces the risk that system upgrades pose to the wider broadcast workflows. Instead of the entire software development team and the wider company going on standby every time a new version of the software is released, expecting a major disaster, regular deployment promotes confidence, as the development team and users within the broadcast facility soon realize that if something goes wrong, the ramifications are greatly constrained.
Regularly deploying code also leads to a higher-quality product. In the case of broadcast workflows, this might mean greater efficiency and even higher-quality video and audio processing. The principal reason is that as the code is regularly upgraded as part of a continuous delivery methodology, there is ample opportunity to fix any bugs that may creep into the software. It’s not unusual for a microservice to have two or three deployments in a single day.
It is fair to ask why developers don’t simply write code with fewer bugs. Modern software-based systems are so complex and dynamic that it’s almost impossible for each person in the development and DevOps teams to know exactly how every process works and interacts. Adopting a continuous delivery approach allows bugs to be identified and fixed very quickly, with minimal impact on the rest of the broadcast infrastructure. It must be said that the code is still written to a very high standard, and continuous delivery soon exposes developers who are not up to the job.
Furthermore, there is now a drive to distribute software development teams throughout the world, especially where follow-the-sun business models are required. The expertise is globally distributed rather than limited to the geographical location of the vendor or broadcaster, allowing higher-quality microservice applications to be designed. For example, a developer who excels in proc-amp design isn’t necessarily going to excel in audio filter design.
Keeping functions constrained within small, easy-to-manage microservice and container architectures with a continuous delivery mindset also encourages a faster time to market for the product. Whether it’s a workflow upgrade or a video and audio processing function, regular deployment allows users and product managers to provide fast and reliable feedback. This allows services to be upgraded quickly and bugs ironed out with lightning speed.
Continuous delivery works hand in hand with microservice architectures, and its methodologies promote the rapid deployment of reliable code while significantly reducing the risks to the wider broadcast infrastructure.