Scalable Dynamic Software For Broadcasters: Part 9 - Integrating On- & Off-prem Datacenters & Cloud

Flexibility and scalability are at the core of microservice architectures. To truly deliver on this, broadcasters will want to leverage the power of on-prem and off-prem hybrid solutions.

When considering on-prem and off-prem datacenters, we must look a little closer at how they’re being used to determine their classification. Although an off-prem infrastructure can often be thought of as a cloud system, that is not always the case.

A service provider can supply anything from bare racks with power, network connectivity, air conditioning, and building security, where the broadcaster will install their own servers and network, to serverless systems where the broadcaster only needs to manage the microservices deployment with little regard for the underlying hardware.

A similar scenario exists with on-prem datacenters. The broadcaster may operate their system as a cloud-type infrastructure with virtualization and scalability, but they are still responsible for installation and maintenance of the entire facility. With on-prem, the broadcaster will be responsible for the network connectivity, power, air conditioning, security, and fire suppression systems. Although these are all achievable for most broadcasters, as they’re used to working in 24/7 mission-critical systems, the complexity of on-prem systems should not be underestimated.

That said, the on-prem system generally provides much more control for a broadcaster than the off-prem equivalent, but it will always have limited capacity, no matter how much the broadcaster tries to future-proof their facility. One of the compelling reasons for moving to IP is that broadcasters do not have to predict viewer requirements ten years ahead. With this in mind, many broadcasters are finding the off-prem and on-prem hybrid model attractive.

Hybrid Infrastructures

Combining on-prem and off-prem systems seems like the perfect solution. Using on-prem means the broadcaster has more control over their infrastructure and can significantly reduce cloud egress and ingress costs, while at the same time moving data quickly to and from local storage. During times of peak demand, which are inevitable for any broadcaster, they can divert some of their workflow traffic to the off-prem facility.

Off-prem cloud systems excel when they are scaling up and down as the number of viewers increases and decreases. Where they don’t do so well is when the workflow is static. This doesn’t mean the broadcaster cannot run a static workflow entirely in a cloud infrastructure, but it requires more thought about the overall structure of the technology solution. There are many costs associated with running the technical side of a broadcast infrastructure, and these might include the procurement of an on-prem datacenter, or they may not. It all depends on the individual requirements of the broadcaster.

The great news is that hybrid on-prem and off-prem infrastructures provide broadcasters with a multitude of options, and it’s for the broadcaster to determine the best route for themselves.

Load Balancing Principles

If we assume the broadcaster has chosen a hybrid infrastructure approach where the static part of the workflow resides on-prem and they have the option of scaling into the off-prem when needed, then how is this achieved? It’s all well and good declaring that the infrastructure must scale, but what does this mean in real terms?

There are two problems to solve: the off-prem infrastructure must deliver new resources, and the workflow traffic must be diverted to them. Scaling the infrastructure is an intrinsic property of the orchestration system, but to divert the traffic, load balancing is used.

Load balancing is the method of distributing data and control information from clients across multiple servers providing the same service. And as microservices are stateless and their functionality is abstracted away from the underlying hardware, they lend themselves perfectly to hybrid on-prem and off-prem operation.
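
To make this concrete, the following is a minimal sketch of the round-robin dispatch a load balancer might perform. The endpoint addresses and the dispatch function are hypothetical; because the microservice is stateless, any instance can take any job, so simple rotation is sufficient:

```python
from itertools import cycle

# Hypothetical pool of identical transcode microservice endpoints,
# some on-prem and some off-prem. The load balancer only sees IP
# addresses, so the location of each instance is irrelevant to it.
ENDPOINTS = [
    "10.0.1.10:8080",   # on-prem node
    "10.0.1.11:8080",   # on-prem node
    "52.31.4.20:8080",  # off-prem (cloud) node
]

endpoint_cycle = cycle(ENDPOINTS)

def dispatch(job_id: str) -> str:
    """Round-robin dispatch: rotate through the endpoint pool and
    return the address the job should be sent to."""
    target = next(endpoint_cycle)
    print(f"job {job_id} -> {target}")
    return target

if __name__ == "__main__":
    for i in range(5):
        dispatch(f"ingest-{i}")
```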

With a container and microservice architecture such as Kubernetes, the node maps to a server, and this may be virtualized or physical. The node runs the pods, which in turn manage the containers and individual microservice applications. Consequently, part of the workflow planning is determining which microservices lend themselves well to operating in an off-prem environment. This is particularly important when the broadcaster considers where the media assets are stored, as costs could easily soar if they are continuously transferred between the off-prem and on-prem facilities. It’s inevitable that some transfer will take place, but the ingress and egress must be kept to a minimum, so storage optimization is critical.
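
As an illustration of this node-to-pod mapping, the sketch below uses the official Kubernetes Python client (pip install kubernetes) to group the running pods by the node hosting them. It assumes an existing cluster and a configured kubeconfig:

```python
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Each node (a physical or virtual server) runs one or more pods,
# which in turn manage the containers and microservice applications.
pods_by_node = defaultdict(list)
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    pods_by_node[pod.spec.node_name].append(pod.metadata.name)

for node, pods in pods_by_node.items():
    print(f"node {node}: {len(pods)} pods")
```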

Figure 1 shows how the workflow traffic is spread between multiple microservices providing the same service. The stateless nature of the microservice means that as jobs are created in the workflow, they can be sent to the load balancer, which in turn decides which microservice to send the job to by specifying the microservice’s IP address.

In the case of the ingest workflow, where the received file must be converted to the broadcaster’s mezzanine format, the transcoders will probably reside within the same pod design. Transcoders are CPU (and sometimes GPU) intensive and require large amounts of local memory. The pod design will be tuned to provide these resources from the node.
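
A sketch of how such a resource-hungry transcoder container might be declared with the Kubernetes Python client is shown below; the image name and resource figures are illustrative assumptions, not recommendations:

```python
from kubernetes import client

# Hypothetical transcoder container spec. The requests reserve node
# capacity for the pod; the limits cap what it may consume.
transcoder = client.V1Container(
    name="mezzanine-transcoder",
    image="registry.example.com/transcoder:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "8", "memory": "16Gi"},
        limits={"cpu": "16", "memory": "32Gi"},
        # A GPU would be added as a limit (e.g. nvidia.com/gpu) on
        # clusters whose nodes expose one via a device plugin.
    ),
)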

This pod model can reside on any node, and the node can reside on any server, whether on-prem, off-prem, virtualized or physical. Having this level of flexibility allows the transcoder node to exist in the broadcaster’s on-prem datacenter or a third-party provider’s off-prem datacenter.
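
One way to express this placement choice, sketched below with the Kubernetes Python client, is a nodeSelector keyed on a site label. The label key and value are hypothetical; in practice the nodes would be labelled when they join the cluster:

```python
from kubernetes import client

# Steer the pod to off-prem capacity by matching a hypothetical node
# label. Omitting the selector lets the scheduler place it anywhere.
pod_spec = client.V1PodSpec(
    containers=[client.V1Container(
        name="mezzanine-transcoder",
        image="registry.example.com/transcoder:1.0",  # hypothetical image
    )],
    node_selector={"site.example.com/location": "off-prem"},
)
```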

Figure 1 – The load balancer acts as an interface to the user, so they do not need to know which node or pod the microservice is running on. If there is capacity within the on-prem datacenter, then more nodes can be created and added to the load balancer.

Off-Prem Load Balancing

There are many third-party off-prem suppliers who provide serverless computing that facilitates microservice architectures. The term serverless is somewhat misleading as servers are still being used; it’s just that the provider is delivering a service-based solution instead of a server solution. This leaves the broadcaster free to focus on the applications and not get bogged down with configuring hardware. Serverless computing is also known as Function as a Service (FaaS), and providers typically offer managed containerized architectures such as Kubernetes alongside it.
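
As an example of the FaaS model, the sketch below uses the AWS Lambda Python handler convention. The event fields and the queue_transcode helper are hypothetical placeholders for a real ingest workflow:

```python
import uuid

def handler(event, context):
    """FaaS entry point: the provider invokes this per request, so the
    broadcaster never configures the underlying server."""
    source_file = event["source_file"]                   # hypothetical field
    target_format = event.get("target_format", "mezzanine")
    job_id = queue_transcode(source_file, target_format)
    return {"statusCode": 200, "job_id": job_id}

def queue_transcode(source_file: str, target_format: str) -> str:
    # Placeholder: hand the job to the load balancer described earlier.
    return f"job-{uuid.uuid4().hex[:8]}"
```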

Figure 2 – Extending from Figure 1, a node in an off-prem datacenter is added to the load balancer. As the load balancer is effectively routing IP packets, it doesn’t matter whether the node is on-prem or off-prem. Care must be taken when determining where the storage is allocated, otherwise there may well be excessive ingress and egress.

If a broadcaster is using a bare-bones off-prem rack system, they must not only provision the physical servers but also decide how they are going to host the containerized architecture.

How the containers are provisioned within the off-prem datacenter depends to a large extent on how quickly the broadcaster will need the scaled resource, and the cost is proportional to the speed with which the resource becomes available. If a number of servers are kept on standby in the public cloud with a specific containerized deployment, then their availability will be in the order of milliseconds. But if the servers need to be created and spun up with a specific containerized deployment, the time to availability can stretch to five or ten minutes.
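
The trade-off can be expressed as a simple decision rule. The timings below are the indicative figures quoted above; the function names and decision logic are hypothetical:

```python
WARM_STANDBY_SECONDS = 0.005   # pre-provisioned pool: milliseconds
COLD_START_SECONDS = 10 * 60   # create and spin up: five to ten minutes

def choose_strategy(seconds_until_peak: float) -> str:
    """A warm standby pool costs money while idle but is available in
    milliseconds; cold-started servers are cheaper but take minutes."""
    if seconds_until_peak <= COLD_START_SECONDS:
        return "warm-standby"  # peak arrives before a cold start finishes
    return "cold-start"        # enough notice to create servers on demand

def expected_wait(strategy: str) -> float:
    """Time before the scaled resource can accept workflow traffic."""
    return WARM_STANDBY_SECONDS if strategy == "warm-standby" else COLD_START_SECONDS

print(choose_strategy(60), expected_wait("warm-standby"))   # warm-standby 0.005
print(choose_strategy(3600), expected_wait("cold-start"))   # cold-start 600
```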

The stateless nature of microservices allows this scaling. Furthermore, the broadcaster can scale across multiple third-party vendors. This not only reduces their risk, but also allows them to choose the most cost-effective service provider.

Containers and microservices not only provide scalable resources for broadcasters, but they can achieve this across multiple vendors as well as the broadcaster’s own on-prem facility.
