"The microservice concept offers the opportunity to build an automated workflow."
Microservices enable broadcasters to find new ways to adopt, engineer, operate and maintain the value of their solutions. For vendors, microservices provide the opportunity to offer what is essentially a self-serve menu to clients, rather than building bespoke workflows internally. The impact on the service broadcasters deliver five to 10 years from now could be dramatic. BroadcastBridge reports.
Traditionally, broadcasters have had to wrestle with ‘heavy’ workflows and often bespoke code and integrations. These were usually ‘single use’ and not typically reusable elsewhere. Implementing microservices provides the opportunity to construct workflows from a variety of functional components and services, and to reuse them across all the workflows within an operation.
Properly implemented, microservices deliver several key benefits: reusability, speed of implementation, and increased flexibility, in that functional parts of a workflow can be swapped out almost on the fly without completely re-engineering the changed workflow. Another benefit is simpler regression testing, both of integrations and of internal code, where there can be thousands of microservices within a product’s function library.
One example is a user needing to deliver content to a non-linear distribution point. Craig Bury, CTO at consultancy and software services developer Three Media, explains that five years ago this process would likely have been one workflow with a ‘loose’ integration to a transcoder and perhaps a file transfer manager, maybe even watch folders.
“The user would have had little variance of options in that delivery,” he says. “A change to the deliverables to a platform, even a minor one, would typically mean the workflow would have to be completely re-started from scratch, manually, as there were no branched options to manage the changed area only.”
The microservice concept offers the opportunity to build an automated workflow with very granular steps and branching, re-using similar services across each step, with tight API integration to all platform components. Now, when a change to a deliverable is required, only that area needs to be re-run rather than the whole workflow, and this can be controlled via API, with the request made directly by a user.
“For example, this would give a user the ability to easily and quickly shuffle the order of operation, change priorities, start and end dates, bumpers or packaging, delivery file formats, etc,” Bury says. “The fine-grained self-serve user control enables change quickly and cost effectively, which is where the benefits lie.”
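The re-run behaviour Bury describes can be sketched as a workflow of discrete, invalidatable steps. This is a minimal illustration, not any vendor's implementation; the step names, orchestration logic and parameters are all hypothetical.

```python
# Hypothetical sketch of a granular, re-runnable delivery workflow.
# Step names and the invalidation API are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]
    done: bool = False
    output: dict = field(default_factory=dict)

def run_workflow(steps, params, rerun=None):
    """Execute steps in order; completed steps are skipped unless
    explicitly invalidated (e.g. because a deliverable changed)."""
    for step in steps:
        if rerun and step.name in rerun:
            step.done = False          # invalidate only the changed area
        if not step.done:
            step.output = step.run(params)
            step.done = True
    return {s.name: s.output for s in steps}

steps = [
    Step("transcode", lambda p: {"format": p["delivery_format"]}),
    Step("package",   lambda p: {"bumper": p["bumper"]}),
    Step("deliver",   lambda p: {"endpoint": p["endpoint"]}),
]

params = {"delivery_format": "HLS", "bumper": "intro-v1", "endpoint": "ott-a"}
run_workflow(steps, params)

# A minor change to the deliverable (new bumper): re-run only that step,
# not the whole workflow.
params["bumper"] = "intro-v2"
result = run_workflow(steps, params, rerun={"package"})
```

The point is the contrast with the monolithic case: changing the bumper invalidates one step, while the transcode and delivery steps keep their earlier results.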
Prior to the rise of OTT, the TV industry largely focused on performance and on squeezing more TV channels onto a constrained network. This was traditionally a CAPEX play, with investment in physical hardware appliances to meet a tightly defined set of infrastructure requirements.
The last five years have seen content providers and operators start to recognise that flexibility and scale are vital to operating in an agile media landscape. This has led to more use of commercial off-the-shelf (COTS) hardware, along with a software-centric approach that enables a mix of datacentres, on-premises, private or public cloud with unified management and tools.
“As soon as the industry embraces the use of COTS hardware as a foundation infrastructure, it becomes possible to use an IT toolset that enables automation and replication,” says Arnaud Caron, Director, Portfolio Transformation, MediaKind. “This process also applies to the software application stack when it is transformed to true microservices, as it is containerised and orchestrated.”
A clear example is the expansion of a video headend to add more channels for satellite distribution. Caron says that in recent years this has meant extensive capacity planning for operators: dedicated encoding hardware, network expansion and routing adjustments, manual reshuffling of the statistical multiplexer, adaptation of the monitoring layer, and careful configuration of the video service on hardware, to name a few.
“With cloud and orchestration, these operations can be automated and rationalised. The process of adding a node is a standard operation but the process of adding it to the video cluster and adjusting the video workflow is now increasingly automated. This relies on orchestration.”
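The automation Caron describes is essentially a reconciliation loop: the orchestrator compares the channel line-up against the running cluster and adds or removes nodes to close the gap. The sketch below is illustrative in the style of cloud orchestrators such as Kubernetes controllers; the function, node names and capacity figure are invented, not MediaKind APIs.

```python
# Illustrative reconciliation loop for a video encoding cluster.
# All names and numbers are hypothetical.

def reconcile(desired_channels, cluster, channels_per_node=4):
    """Bring the encoder cluster to the capacity the channel line-up needs."""
    needed = -(-desired_channels // channels_per_node)   # ceiling division
    while len(cluster) < needed:
        cluster.append(f"encoder-{len(cluster) + 1}")    # provision a node
    while len(cluster) > needed:
        cluster.pop()                                    # scale back down
    return cluster

# The operator adds channels for satellite distribution; orchestration,
# not manual capacity planning, expands the cluster to match.
cluster = reconcile(10, ["encoder-1", "encoder-2"])   # 10 channels need 3 nodes
```

The same declared-state approach handles scale-down when channels are retired, which is the "elasticity" the cloud model promises.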
The key benefit of microservices-designed architectures to broadcasters is flexibility. The industry is well aware of the fast-paced nature of broadcast content, especially in live environments, where content needs to be distributed to linear TV channels and IPTV streams quickly, efficiently and at a high enough quality to meet the quality of experience demanded by the average viewer.
“By using microservices to create modularity in application development, something that wasn’t available under the monolithic approach of the past, it allows broadcasters the opportunity to work in an environment with a software development community, pushing forward a more collaborative and flexible environment,” says Joop Janssen, CEO at Aperi.
“Broadcasters can now reach software innovation speeds that were previously constrained by delays in hardware deployments. Microservices are the next logical step for broadcasters.”
There are a number of key challenges to overcome before the potential of microservices and the cloud can be realised in the broadcast sector. Although IP adoption has accelerated, industry-specific interfaces and protocols such as SDI remain the dominant transport technology for production and facilities, particularly for uncompressed video. Dependence on SDI impedes the ability to scale and grow operations efficiently, and leads to separate broadcast and IT infrastructures that further inhibit flexibility.
Another impact is at the human and organisational layers: “the need to build the right skillset within the organisation, potentially shifting from engineering towards managed services; removing siloed organisations in favour of shared practices,” says Caron.
“New market entrants that have purely started as OTT, SVOD, VOD services are able to offer compelling services with a shorter time to market because they are built in the cloud,” he says. “They are based on both IT technologies and IT practices, which have been specifically adapted to media.”
Barriers to Adoption
Initial barriers to adoption of microservices tend to surface when a process improvement initiative identifies one or more functions within a monolithic system that require changing or upgrading.
“The desired changes often require a top to bottom replacement or upgrade of the entire application stack rather than simply replacing or changing functional components of the application,” Bury says. “In most instances a top to bottom replacement would involve significant time and cost, and require significant integration work to deliver new workflows and integrations and ensure continued functionality of critical external business systems such as scheduling, contract management, and finance and accounting. The cost and time to change would need to be assessed alongside the commercial pressures to deliver new workflows and functionality for clients. Unless significant investment is made, the client deliverables will more often than not take priority.”
Perhaps the key development in the evolution of microservices is the rapid adoption of containerisation (Docker, Kubernetes) and the maturing of serverless functions. This transition will drive down the costs of integrating and hosting microservices, whether in the cloud or on premises. It should also increase the breadth of services and types of functionality available, increase flexibility, and decrease time to deploy.
In Five Years’ Time
By 2025, thousands of microservices could exist within a product, Bury suggests, “all exposed externally as well as internally via a well-behaved and well-documented set of APIs.
“Clients and their users will be able to build their own workflows to drive the vendor system simply by calling the various system APIs, thus eliminating the burden of near-total vendor lock-in. This dramatically changes the vendor landscape and offerings, as there will be little or no ongoing bespoke work required. User interfaces will be built to provide views that support this flexibility, backed by extended data schemas to accommodate this approach. This takes self-serve to another level, with full control having moved to the client.”
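What self-serve workflow building might look like from the client side can be sketched as chaining calls to the vendor's exposed APIs. Everything here is hypothetical: the endpoint paths, payloads and the stand-in `call_api` function are invented for illustration, not a real vendor interface.

```python
# Hypothetical self-serve client: each microservice is a documented API,
# and the client composes them into its own workflow.

def call_api(endpoint, payload):
    # Stand-in for an HTTP POST to the vendor's API gateway.
    return {"endpoint": endpoint, "status": "ok", **payload}

def build_delivery_workflow(asset_id, platform):
    """Chain vendor APIs instead of commissioning bespoke integration work."""
    steps = [
        ("/v1/transcode", {"asset": asset_id, "profile": platform}),
        ("/v1/package",   {"asset": asset_id, "container": "CMAF"}),
        ("/v1/deliver",   {"asset": asset_id, "target": platform}),
    ]
    return [call_api(endpoint, body) for endpoint, body in steps]

receipts = build_delivery_workflow("ep-0042", "ott-platform-a")
```

Because the workflow lives in the client's own code rather than in vendor-built integrations, reordering steps or swapping a delivery target is a change to this script, not a change request to the vendor.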
Microservices will likely give broadcasters access to ubiquitous infrastructure, full flexibility, service agility and near-endless scalability. The impact on the service delivered five to 10 years from now could be dramatic.
“With cloud elasticity, you’re not tied to just one linear broadcast channel; you can create as many OTT variants as you wish to better match audience demographics,” says Eric Gallier, Vice President, Video Customer Solutions, at Harmonic. “Obviously, it is easy to create event-based, pop-up channels, which have already been used extensively during major sports events. The rights for premium sports content are becoming so expensive and complex that distributing a programme directly to end users, or to super-aggregator MVPD partners, now calls for sophisticated methods to describe per-event distribution rights, and potentially to enforce those rights at the edge by either blacking out the content or replacing it with alternate programming.”
Gallier adds, “Advertising monetization is moving to targeted models, but somewhat slowly because of flexibility and scalability issues. But the issue of scalability is going to disappear over time thanks to cloud distribution. Only cloud deployment can enable the kind of elasticity that is required by targeted advertisement workflows during the ad break of a premium sports event. Advertisers want to see more efficient use of spending dollars, so the quest for highly accurate targeting is only going to accelerate. The operation of replacing black-out and targeted advertising content requires cloud elasticity and scalability.
“Video quality is another area where the cloud can be leveraged to generate (i.e., transcode) different variants of the same programme to match end-user device capabilities, which are increasingly fragmented in terms of codec support (e.g., HEVC, AV1, VVC, EVC), resolution up to 4K and even 8K, and HDR with HLG, HDR-10 and Dolby Vision. Combine that with the different streaming formats (e.g., HLS or DASH) and the different DRMs, and you end up with an ever-increasing number of combinations that only a cloud solution can manage. Five years from now we’ll see a significant increase in the number of people watching streaming broadcast content. It is very likely that streaming services will be much more personalised than today. The main question facing broadcasters then will not be whether cloud is a good solution, but which cloud solution is best to manage a service that delivers personalised content, based on analytics and other data, to mass audiences. Only an extremely agile solution will be up to the challenge.”
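The combinatorial pressure Gallier describes is easy to quantify: each independent axis (codec, resolution, HDR format, streaming format, DRM) multiplies the number of deliverables. The lists below are illustrative subsets, not a complete matrix of any vendor's support.

```python
# A quick count of delivery variants across independent axes.
# The option lists are illustrative, not exhaustive.
from itertools import product

codecs      = ["AVC", "HEVC", "AV1", "VVC"]
resolutions = ["1080p", "4K", "8K"]
dynamic     = ["SDR", "HLG", "HDR-10", "Dolby Vision"]
formats     = ["HLS", "DASH"]
drms        = ["Widevine", "PlayReady", "FairPlay"]

variants = list(product(codecs, resolutions, dynamic, formats, drms))
print(len(variants))   # 4 * 3 * 4 * 2 * 3 = 288 combinations to transcode,
                       # package and protect per programme
```

Even this modest subset yields hundreds of renditions per programme, which is the scale argument for cloud elasticity rather than fixed hardware.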