Scalable Dynamic Software For Broadcasters: Part 4 - Virtualization vs. Containers

The differences between virtual machines (VMs) and containers go beyond how they utilize the underlying hardware.

VMs started to make a name for themselves when server hardware improved to the extent that the average application struggled to use all of its resources. This may seem like a contradiction for broadcasters, as we seem to continually invent new formats that exhaust hardware processing power. However, even broadcasters don’t run their servers at full power 24/7.

Hypervisor And VMs

The hypervisor is the method of hardware arbitration and management used by VMs to make more efficient use of the hardware. It divides the underlying server’s hardware resources so that, from the point of view of each user application, the whole hardware resource appears dedicated to that application. In reality, multiple operating systems run on top of the hypervisor, which may be a combination of hardware, software, and firmware, with the hypervisor time-division multiplexing the resource between them.

Type-1 and type-2 systems have emerged as the leading versions of hypervisor. Type-1 is the bare-metal hypervisor and runs directly on the hardware before any operating system is loaded. This allows it to intercept any software instruction meant for a specific hardware resource and then schedule it to that resource. For example, three Linux operating systems may be running on a type-1 hypervisor, all trying to send an IP packet to the Network Interface Card (NIC). The hypervisor intercepts these commands, queues them, and then schedules them to be sent to the NIC in turn. As each IP message is transmitted by the NIC, the resulting acknowledgment is relayed back to the appropriate operating system (see diagram 1).
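The arbitration described above can be sketched in a few lines of Python. This is a toy simulation, not a real hypervisor API: the class and method names are hypothetical, and it only illustrates the queue-then-schedule pattern of time-division multiplexing access to one NIC.

```python
from collections import deque

class NIC:
    """A single shared hardware resource: transmits one packet at a time."""
    def __init__(self):
        self.transmitted = []

    def transmit(self, packet):
        self.transmitted.append(packet)
        return f"ack:{packet}"

class Hypervisor:
    """Toy sketch of a type-1 hypervisor: intercepts guest transmit
    requests and time-division multiplexes them onto one physical NIC."""
    def __init__(self, nic):
        self.nic = nic
        self.queue = deque()

    def intercept(self, guest, packet):
        # Each guest OS believes it owns the NIC; the request is queued.
        self.queue.append((guest, packet))

    def schedule(self):
        # Drain the queue in turn, relaying each ack back to its guest.
        acks = {}
        while self.queue:
            guest, packet = self.queue.popleft()
            acks.setdefault(guest, []).append(self.nic.transmit(packet))
        return acks

nic = NIC()
hv = Hypervisor(nic)
for guest in ("linux-1", "linux-2", "linux-3"):
    hv.intercept(guest, f"ip-packet-from-{guest}")
acks = hv.schedule()
```

Each guest’s send call never touches the NIC directly; the hypervisor serializes the three requests and returns the acknowledgments per guest, mirroring diagram 1.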

Type-2 uses the same method, but instead of the hypervisor sitting on top of the bare-metal server, it runs on top of the server’s own operating system. Although this method is incredibly versatile, as it allows multiple operating systems to be run on a laptop or desktop computer as well as a server, it tends to be slower and suffers from higher latency because the commands have an additional operating-system layer to navigate (see diagram 2).

Both type-1 and type-2 hypervisor VMs require a complete operating system to be created for each user application. This may work well in terms of demarcation, especially when complete (albeit virtual) servers are needed, but VMs tend to consume gigabytes of storage and can take several minutes to create. Furthermore, the whole process of creating and removing VMs is often complex to manage.

Diagram 1 – In a type-1 hypervisor VM, the hypervisor effectively acts as a hardware emulator for the operating systems that reside on the server. As far as each Linux OS is concerned, it is communicating with the hardware, but the hypervisor is intercepting the instructions and scheduling each message as the hardware becomes available.

Diagram 2 – In a type-2 hypervisor VM, the hypervisor runs on top of the server’s own operating system. This could even be macOS, for example; as well as running all its own applications, the host OS is also running the hypervisor, which in turn is managing three Linux operating systems.

Containers

Understanding the power of containers and the microservices architecture requires a change of mindset. With VMs we think in terms of complete servers. They have certainly achieved their objectives in making the most efficient use of the server, and they are flexible and scalable, but we still view them as physical machines. With containers, we move our thinking from a machine to a function or service. Instead of thinking about a program running on an operating system that resides on a virtual server platform, we think of a “box” that will perform a particular task for us.

Thinking about functions helps us understand the levels of hardware abstraction that microservice architectures take advantage of. By creating this abstraction, we remove the direct link between service and hardware, and this is the essence of microservice architectures.

Containers encapsulate the application program and its operating-system dependencies. This makes them both lightweight and highly scalable: lightweight because we don’t need to create an entire operating system for each user application, and highly scalable because we can move the user application to any hardware platform we like, including on-prem and off-prem virtualization and the public cloud. Containers share the same server operating system but function independently.
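The storage difference this implies can be made concrete with some back-of-the-envelope arithmetic. The figures below are hypothetical round numbers chosen for illustration, not measurements: the point is only that a per-application guest OS multiplies with every VM instance, while containers multiply only their application layers.

```python
# Illustrative footprint comparison; sizes are assumed, not measured.
GUEST_OS_MB = 2048      # full guest operating system image per VM (assumed)
APP_AND_LIBS_MB = 150   # application plus its library dependencies (assumed)

def vm_storage(instances):
    # Every VM instance duplicates the whole guest OS plus the app.
    return instances * (GUEST_OS_MB + APP_AND_LIBS_MB)

def container_storage(instances):
    # Containers share the one host OS, so only app layers multiply.
    return instances * APP_AND_LIBS_MB

ten_vms = vm_storage(10)              # 21980 MB
ten_containers = container_storage(10)  # 1500 MB
```

With these assumed sizes, ten VM instances cost roughly fourteen times the storage of ten containers, which is why container fleets can be created and torn down in seconds rather than minutes.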

Diagram 3 shows how VMs and containers interact with the operating system. VMs operate as stand-alone servers that carry a lot of overhead, whereas containers share one operating system and use only the dependent libraries they need. For example, a transcoder running in a server farm will probably need one VM per instance, whereas a containerized transcoder needs just a container resource, which may use all or part of a server.

It’s important to note that the container isn’t magic: it still imposes hardware constraints, but these are more easily configured and offer many more options for resource utilization.

Diagram 3 – left, shows a type-1 VM with hypervisor separating the hardware from the software. Right, shows a container system with lightweight microservices to maintain greater flexibility and scalability.

A transcoder container operates within a node environment; the node specifies the server it runs on and the resources it requires. As well as running on a standalone server, the container could operate in a public-cloud virtualized server. Alternatively, the public cloud provider may supply a container-engine service, allowing the transcoder to be moved to that resource and operated directly from there.

Communications between microservices are facilitated by the messaging plane, which forms part of the microservice architecture. It is possible to replicate this with VMs instead of microservice architectures, but doing so would require a custom solution that may not be completely open, making it difficult to maintain and monitor, especially if multiple vendors were involved. The built-in messaging system within the microservice architecture allows microservices running anywhere in the world to communicate with each other as required. There are obvious security implications to this, and part of the architecture’s job is to ensure that only the microservice programs that should communicate with each other can do so. Implementing this in a VM-only environment would again require custom solutions.
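A minimal sketch of such a messaging plane, including the only-talk-to-whom-you-should security point, might look as follows. This is a toy publish/subscribe model with hypothetical names, not any real messaging product: an allow-list decides which service may publish to which topic.

```python
from collections import defaultdict

class MessagePlane:
    """Toy pub/sub messaging plane (hypothetical sketch, not a real API).
    An allow-list of (service, topic) pairs enforces which microservices
    may publish where, mirroring the security point in the text."""
    def __init__(self, allowed):
        self.allowed = allowed                 # set of (service, topic)
        self.subscribers = defaultdict(list)   # topic -> [handler, ...]

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, service, topic, message):
        if (service, topic) not in self.allowed:
            raise PermissionError(f"{service} may not publish to {topic}")
        for handler in self.subscribers[topic]:
            handler(message)

received = []
plane = MessagePlane(allowed={("transcoder", "jobs.done")})
plane.subscribe("jobs.done", received.append)

# Permitted: the transcoder announces a finished job.
plane.publish("transcoder", "jobs.done", {"job": 42, "status": "ok"})

# Refused: an unlisted service is blocked by the plane itself.
try:
    plane.publish("rogue-service", "jobs.done", {"job": 0})
except PermissionError:
    pass
```

Because the plane itself enforces the allow-list, individual microservices need no custom point-to-point security code, which is exactly the property a VM-only deployment would have to rebuild by hand.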

This is also the case with the user API. It is possible to build APIs for VM solutions, and this is an accepted methodology. However, these APIs tend to be vendor-specific and don’t always follow a prescribed format. APIs operating within the microservices architecture comply with the container protocols, so that well-defined and backwards-compatible APIs are built into the system at the start of the design and maintained throughout their life cycles.
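Backwards compatibility in practice usually means that a new API version may add fields but never remove or rename the ones earlier clients depend on. The sketch below illustrates that contract for a hypothetical transcoder status endpoint; the field names and versions are invented for illustration.

```python
# Sketch of a backwards-compatible, versioned microservice API response
# (endpoint, field names, and versions are hypothetical).

def transcode_status_v1(job):
    # The original contract: old clients parse exactly these fields.
    return {"id": job["id"], "state": job["state"]}

def transcode_status_v2(job):
    # v2 is a strict superset of v1: it adds a field, removes nothing,
    # so v1 clients can still parse a v2 response unchanged.
    response = transcode_status_v1(job)
    response["progress_percent"] = job.get("progress", 0)
    return response

job = {"id": "job-7", "state": "running", "progress": 55}
v1 = transcode_status_v1(job)
v2 = transcode_status_v2(job)

# Every v1 field is present and unchanged in v2.
assert all(v2[key] == value for key, value in v1.items())
```

Building each new version on top of the previous one makes the compatibility guarantee structural rather than a matter of discipline, which is the property the article attributes to APIs designed into the microservice architecture from the start.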

In essence, the container operates as an independent entity that can be moved between virtualized machines, the cloud, or physical hardware. Furthermore, as containers have much smaller memory and storage requirements, they can easily be loaded onto many types of public cloud or private datacenter. It’s this ability to operate the container, with its encapsulated microservice, on many different types of platform that delivers the much-lauded flexibility, scalability, and resilience.
