Scalable Dynamic Software For Broadcasters: Part 9 - Integrating On- & Off-prem Datacenters & Cloud

Flexibility and scalability are at the core of microservice architectures. To truly deliver on this, broadcasters will want to leverage the power of on-prem and off-prem hybrid solutions.

When considering on-prem and off-prem datacenters, we must look a little closer at how they’re being used to determine their classification. Although an off-prem infrastructure can often be thought of as a cloud system, that is not always the case.

A service provider can supply anything from bare racks with power, network connectivity, air conditioning, and building security, where the broadcaster will install their own servers and network, to serverless systems where the broadcaster only needs to manage the microservices deployment with little regard for the underlying hardware.

A similar scenario exists with on-prem datacenters. The broadcaster may operate their system as a cloud-type infrastructure with virtualization and scalability, but they are still responsible for installation and maintenance of the entire facility. With on-prem, the broadcaster will be responsible for the network connectivity, power, air conditioning, security, and fire suppression systems. Although these are all achievable for most broadcasters, as they’re used to working in 24/7 mission-critical systems, the complexity of on-prem systems should not be underestimated.

That said, an on-prem system generally gives a broadcaster much more control than the off-prem equivalent, but it will always have limited capacity. No matter how much the broadcaster tries to future-proof their facility, one of the compelling reasons for moving to IP is that broadcasters do not have to think too much about viewer requirements in ten years’ time. With this in mind, many broadcasters are finding the off-prem and on-prem hybrid model attractive.

Hybrid Infrastructures

Combining on-prem and off-prem systems seems like the perfect solution. Using on-prem means the broadcaster has more control over their infrastructure and can significantly reduce cloud egress and ingress costs, while at the same time moving data quickly to and from local storage. During times of peak demand, which is inevitable for any broadcaster, they can divert some of their workflow traffic to the off-prem facility.
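As a rough sketch of this peak-demand diversion, the routing decision can be thought of as a spillover rule: keep jobs on-prem until local capacity is reached, then overflow to the off-prem pool. The capacity figure and job counts below are purely illustrative assumptions, not real sizing data.

```python
# Hypothetical hybrid "spillover" routing sketch: jobs stay on-prem
# until local capacity is reached, then overflow to the off-prem pool.
ON_PREM_CAPACITY = 8  # assumed number of concurrent jobs on-prem can run

def route_jobs(num_jobs: int, on_prem_capacity: int = ON_PREM_CAPACITY):
    """Return (on_prem_jobs, off_prem_jobs) for a burst of num_jobs."""
    on_prem = min(num_jobs, on_prem_capacity)
    off_prem = num_jobs - on_prem  # overflow diverted during peak demand
    return on_prem, off_prem

print(route_jobs(5))   # quiet period: everything stays on-prem -> (5, 0)
print(route_jobs(12))  # peak demand: 8 on-prem, 4 spill off-prem -> (8, 4)
```

In practice this decision is made by the orchestration and load-balancing layers discussed below, not by application code, but the principle is the same.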

Off-prem cloud systems excel when they are scaling up and down as the number of viewers increases and decreases. Where they don’t do so well is when the workflow is static. This doesn’t mean that the broadcaster cannot use a cloud infrastructure entirely for a static workflow, but it requires a little more thought about the overall structure of the technology solution. There are many costs associated with running the technical side of a broadcast infrastructure, and whether these justify the procurement of an on-prem datacenter depends on the individual requirements of the broadcaster.

The great news is that hybrid on-prem and off-prem infrastructures provide broadcasters with a multitude of options, and it’s for the broadcaster to determine the best route for themselves.

Load Balancing Principles

If we assume the broadcaster has chosen a hybrid infrastructure approach where the static part of the workflow resides on-prem and they have the option of scaling into the off-prem when needed, then how is this achieved? It’s all well and good declaring that the infrastructure must scale, but what does this mean in real terms?

There are two problems to solve: the off-prem infrastructure must deliver new resources, and the workflow traffic must be diverted to them. Scaling the infrastructure is an intrinsic property of the orchestration system, while diverting the traffic is achieved with load balancing.

Load balancing is the method of distributing data and control information from clients across multiple servers. And as microservices are stateless, with their functionality abstracted away from the underlying hardware, they lend themselves perfectly to hybrid on-prem and off-prem operation.

With a container and microservice architecture such as Kubernetes, the node maps to a server, and this may be virtualized or physical. The node runs the pods, which in turn manage the containers and individual microservice applications. Consequently, part of the workflow planning is determining which microservices lend themselves well to operating in an off-prem environment. This is particularly important when the broadcaster considers where the media assets are stored as costs could easily soar if they are continuously transferred to and from the off-prem and on-prem facilities. It’s inevitable that some transfer will take place, but the ingress and egress must be kept to a minimum, so storage optimization is critical.
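To see why storage placement dominates the planning, a little illustrative arithmetic helps. The egress rate below is an assumed figure for the sketch, not a quoted price from any provider.

```python
# Illustrative arithmetic only: why media storage placement matters.
EGRESS_RATE_PER_GB = 0.09  # assumed cloud egress price, USD/GB

def monthly_egress_cost(asset_size_gb: float, transfers_per_day: int,
                        rate: float = EGRESS_RATE_PER_GB) -> float:
    """Cost of repeatedly pulling an asset back from the off-prem store."""
    return asset_size_gb * transfers_per_day * 30 * rate

# A 50 GB mezzanine file fetched back on-prem 10 times a day:
print(round(monthly_egress_cost(50, 10), 2))  # 1350.0 USD per month
```

Even at modest per-gigabyte rates, repeatedly moving large mezzanine files between facilities quickly dwarfs the compute cost, which is why the storage should sit alongside the microservices that touch it most.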

Figure 1 shows how the workflow traffic is spread between multiple microservices providing the same service. The stateless nature of the microservice means that as jobs are created in the workflow, they can be sent to the load balancer, which in turn decides which microservice to send the job to by specifying the microservice’s IP address.
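The dispatch step described above can be sketched minimally in Python. This is a toy round-robin balancer under assumed placeholder IP addresses; a production balancer would also health-check its backends and weight them by load.

```python
from itertools import cycle

# Minimal round-robin load balancer sketch. The backend IPs are
# placeholders, and no health checking is modeled.
class RoundRobinBalancer:
    def __init__(self, backends):
        self._backends = cycle(backends)

    def dispatch(self, job):
        """Because the microservices are stateless, any backend can take
        any job; we simply pick the next IP address in rotation."""
        target_ip = next(self._backends)
        return target_ip, job

lb = RoundRobinBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
for job_id in range(4):
    print(lb.dispatch(f"transcode-{job_id}"))
# Jobs rotate across the three IPs, wrapping back to 10.0.0.11
```

Statelessness is what makes this trivial: since no job depends on which instance handled the previous one, the balancer needs no session affinity.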

In the case of the ingest workflow, where the received file must be converted to the broadcaster’s mezzanine format, the transcoders will probably reside within the same pod design. Transcoders are CPU (and sometimes GPU) intensive and require large amounts of local memory. The pod design will be tuned to provide these resources from the node.
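The tuning decision boils down to a fit check between what the pod requests and what the node has spare, in the style of a container scheduler. The resource figures below are illustrative assumptions; real Kubernetes requests are expressed in millicores and byte quantities.

```python
# Hedged sketch of pod-to-node fit checking, scheduler-style.
def pod_fits_node(pod_request: dict, node_free: dict) -> bool:
    """True if the node has enough spare CPU cores and memory (GiB)."""
    return (pod_request["cpu"] <= node_free["cpu"]
            and pod_request["memory_gib"] <= node_free["memory_gib"])

transcoder_pod = {"cpu": 8, "memory_gib": 32}   # CPU and memory hungry
small_node = {"cpu": 4, "memory_gib": 16}
large_node = {"cpu": 16, "memory_gib": 64}

print(pod_fits_node(transcoder_pod, small_node))  # False
print(pod_fits_node(transcoder_pod, large_node))  # True
```

A transcoder pod that cannot fit any on-prem node is exactly the candidate to schedule onto off-prem capacity instead.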

This pod model can reside on any node, and the node can reside on any server, whether on-prem, off-prem, virtualized or physical. Having this level of flexibility allows the transcoder node to exist in the broadcaster’s on-prem datacenter or the third-party provider’s off-prem datacenter.

Figure 1 – The load balancer acts as an interface to the user, so they do not need to know which node or pod the microservice is running on. If there is capacity within the on-prem datacenter, then more nodes can be created and added to the load balancer.

Off-Prem Load Balancing

There are many third-party off-prem suppliers who provide serverless computing that facilitates microservice architectures. The term serverless is somewhat misleading, as servers are still being used; it’s just that the provider delivers a service-based solution instead of a server solution. This leaves the broadcaster to focus on the applications and not get bogged down with configuring hardware. Serverless computing is also known as Function as a Service (FaaS), and the same providers typically also offer managed containerized platforms such as Kubernetes.

Figure 2 – Extending from Figure 1, a node in an off-prem datacenter is added to the load balancer. As the load balancer is effectively routing IP packets, it doesn’t matter whether the node is on-prem or off-prem. Care must be taken when determining where the storage is allocated, otherwise there may well be excessive ingress and egress.

If a broadcaster is using a bare-bones off-prem rack system, they must not only provision the physical servers but also decide how they are going to deploy the containerized architecture.

How the containers are provisioned within the off-prem datacenter depends to a large extent on how quickly the broadcaster is going to need the scaled resource, and the cost is proportional to the speed with which that resource becomes available. If a number of servers are kept on standby in the public cloud with a specific containerized deployment, then their availability is going to be of the order of milliseconds. But if the servers need to be created and spun up with a specific containerized deployment, the time to availability can stretch to five or ten minutes.
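This trade-off can be made concrete with a simple cost model. All figures below, including the hourly rate and standby fleet size, are assumptions for the sketch rather than real provider pricing.

```python
# Illustrative cost/latency trade-off for scaled off-prem capacity:
# warm standby servers cost money continuously but respond in
# milliseconds; cold-started servers are cheap while idle but slow.
def standby_cost_per_month(servers: int, hourly_rate: float) -> float:
    """Cost of keeping pre-deployed servers idling, ready in milliseconds."""
    return servers * hourly_rate * 24 * 30

def cold_start_delay_minutes() -> float:
    """Assumed time to create servers and deploy containers from scratch."""
    return 7.5  # within the five-to-ten-minute range described above

print(standby_cost_per_month(4, 0.50))  # 1440.0 USD/month for 4 warm servers
```

Broadcasters whose peaks are predictable (a scheduled live event, say) can tolerate the cold-start delay; unpredictable breaking-news peaks justify paying for warm capacity.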

The stateless nature of microservices allows this scaling. Furthermore, the broadcaster can scale across multiple different third-party vendors. This not only reduces their risk, but also allows them to choose the most cost-effective service provider.
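Choosing the most cost-effective provider for a given burst can be sketched as a simple filter-and-minimize over the vendor list. The provider names, rates, and spin-up times below are entirely hypothetical.

```python
# Sketch of picking the cheapest off-prem provider that can deliver
# capacity fast enough. All provider data is hypothetical.
providers = [
    {"name": "cloud-a", "rate_per_hour": 0.52, "spin_up_minutes": 6},
    {"name": "cloud-b", "rate_per_hour": 0.47, "spin_up_minutes": 9},
    {"name": "cloud-c", "rate_per_hour": 0.61, "spin_up_minutes": 1},
]

def cheapest_provider(providers, max_spin_up_minutes: float) -> str:
    """Lowest hourly rate among providers that can scale up in time."""
    eligible = [p for p in providers
                if p["spin_up_minutes"] <= max_spin_up_minutes]
    return min(eligible, key=lambda p: p["rate_per_hour"])["name"]

print(cheapest_provider(providers, max_spin_up_minutes=10))  # cloud-b
print(cheapest_provider(providers, max_spin_up_minutes=2))   # cloud-c
```

Because the microservices are stateless and containerized, switching which vendor receives the overflow is a routing decision rather than a re-engineering exercise.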

Containers and microservices not only provide scalable resources for broadcasters, but they can achieve this across multiple vendors as well as the broadcaster’s own on-prem facility.
