Reality of IP - Part 2 - Benefits of Virtualization

Riding on the back of IT innovation allows broadcasters to benefit from virtualization. In this article, we investigate those benefits and learn how they apply to television, especially as new trailblazers wait in the wings.

Telephone network operators were suffering from the same challenges found in many current-day broadcast facilities. Proprietary hardware equipment was prevalent and even increasing as new technologies were developed. Space and power were at a premium, and finding places to install hardware was proving more and more difficult.

Increasing energy costs, the need for more capital investment, and a shortage of the skills needed to design, install, and maintain complex hardware systems were proving unsustainable. And hardware systems quickly reach their end of life, especially when new technologies are constantly being developed. This leads to very short design, install, and operate life cycles, resulting in poor return on investment for operators.

Virtualization to the Rescue

Virtualization addresses these problems by leveraging standard topologies found in IT systems. Building on the economies of scale gained by hardware manufacturers during their research and development cycles, broadcast service suppliers can achieve significantly reduced equipment costs and reduced power consumption.

Before virtualization was available, IT hardware was either under- or over-utilized. But virtualization enables more efficient use of IT infrastructure, as it allows services to be moved seamlessly between resources and balanced more efficiently to make better use of the available hardware.
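One concrete example of this is live migration, where a running guest is moved from one server to another. Below is a minimal sketch using the open-source libvirt Python bindings; the host URIs and guest name are illustrative assumptions, not part of any specific broadcast product.

```python
import libvirt

# Hypothetical source and destination hosts, addressed over SSH.
SOURCE_URI = "qemu+ssh://host-a.example.com/system"
DEST_URI = "qemu+ssh://host-b.example.com/system"
GUEST_NAME = "playout-service"  # hypothetical guest instance

src = libvirt.open(SOURCE_URI)
dst = libvirt.open(DEST_URI)

dom = src.lookupByName(GUEST_NAME)

# VIR_MIGRATE_LIVE copies the guest's memory while it keeps running,
# so the service moves to the new server with minimal interruption.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```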

Hardware design life cycles can last many months and even years. But the economies of scale needed to design and manufacture hardware, such as computer servers and Ethernet switches, do not apply to software-based development. As data speeds increase, hardware manufacturers must purchase faster and faster test equipment, and continually invest in smaller skillset pools.

As device speeds increase, the cost of the associated test equipment increases exponentially, and the salaries of the engineers designing the equipment increase as their skillsets become more and more specialized. This makes high-speed hardware design increasingly difficult for small and medium businesses.

Diagram 1 – Three physical servers, each running multiple copies of different operating systems and applications via the hypervisor virtualized management software.

In virtualized architectures, software vendors can design, build, and test their products on the same infrastructure. And with cloud systems, they can even employ follow-the-sun development strategies to program and release code 24 hours a day.

Innovators will Win

Existing broadcast service vendors may be looking at the developing virtualization platforms with some concern. But this doesn't have to be the case, as many vendors already specialize in software development. In many cases, the hardware they designed was simply necessary to sell the service their software was providing. Instead, a standardized hardware platform could be an opportunity, as their real value often lies in the service their software provides or the problem it solves.

Cloud infrastructures are deployed throughout the world with high-speed connectivity between data centers enabling localized processing to be brought closer to the viewer. This reduces latency considerably and improves the viewer experience.

Virtualization mechanisms are at the heart of cloud computing. Hardware virtualization is the process of creating a virtual machine that acts like a real computer with its own operating system. The host machine is the actual physical server running the virtualization system and the guest machine is the virtual instance. Many virtual instances with differing operating systems can run on a single host device.

Hypervisor and VMM

The software that provides the virtualization and creates the instances is called the hypervisor or Virtual Machine Manager (VMM). Both the VMM and hypervisor sit between the physical server hardware and the virtual instances.
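As a minimal sketch of how an application talks to a hypervisor, the open-source libvirt Python bindings can query a host for its guest instances. The connection URI below assumes a local KVM/QEMU host; other hypervisors with libvirt drivers work the same way.

```python
import libvirt

# Connect to the hypervisor on the local host (KVM/QEMU assumed here).
conn = libvirt.open("qemu:///system")

print(f"Host: {conn.getHostname()}")

# Each virDomain object represents one guest (virtual machine instance).
for dom in conn.listAllDomains():
    state, max_mem_kb, _, vcpus, _ = dom.info()
    status = "running" if dom.isActive() else "stopped"
    print(f"  Guest: {dom.name()} ({status}, {vcpus} vCPUs, "
          f"{max_mem_kb // 1024} MB)")

conn.close()
```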

Although virtualization was available on mainframe computers as early as 1967, it was not supported by the original x86 architecture and only saw a resurgence from around 2005. This was due to the improved hardware capabilities of servers, the drive for greater cost efficiencies, better control of server farms, and improved security and reliability through hypervisor architectures.

To help improve virtualization, Central Processing Units (CPUs), such as the Intel Xeon series, are built with support for virtualization directly in the silicon, thus offloading some of the hypervisor's work and providing a form of hardware acceleration.
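On a Linux server, this hardware support is easy to confirm: Intel's extensions (VT-x) appear as the vmx flag in /proc/cpuinfo, and AMD's (AMD-V) as svm. A quick Python check:

```python
# Check /proc/cpuinfo for the hardware virtualization extensions:
# "vmx" indicates Intel VT-x, "svm" indicates AMD-V.
def has_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("Hardware virtualization support:", has_virtualization_support())
```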

Diagram 2 – HPE ProLiant DL360 server supporting Intel's Xeon processor with virtualization hardware support.

Cloud infrastructures provide systems to facilitate the automated initialization and control of instances. Monitoring software, also running in the cloud, establishes whether more resources are needed and will spin up new instances as required. When the workload reduces, the instances are switched off and deleted.
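The monitoring-and-scaling loop can be sketched in a few lines of Python. The CloudClient class below is a stand-in for a real cloud provider SDK, and the thresholds are illustrative assumptions.

```python
import random
import time

class CloudClient:
    """Stand-in for a real cloud provider SDK (illustrative only)."""
    def __init__(self):
        self.instances = ["worker-1", "worker-2"]
    def average_cpu_load(self, group):
        return random.random()  # simulated monitoring metric (0.0 to 1.0)
    def launch_instance(self, group):
        self.instances.append(f"worker-{len(self.instances) + 1}")
    def terminate_instance(self, group):
        self.instances.pop()

def autoscale(cloud, group, high=0.80, low=0.20, min_instances=2, cycles=10):
    for _ in range(cycles):
        load = cloud.average_cpu_load(group)
        if load > high:
            cloud.launch_instance(group)        # spin up a new instance
        elif load < low and len(cloud.instances) > min_instances:
            cloud.terminate_instance(group)     # switch off and delete
        print(f"load={load:.2f} instances={len(cloud.instances)}")
        time.sleep(1)

autoscale(CloudClient(), "transcode-farm")
```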

Few businesses have the privilege of building green-field systems and will instead develop organically. Cloud infrastructures exist on-prem and off-prem, and they may be private or public. A broadcaster moving to the cloud may find they have multiple systems from different service suppliers and hardware manufacturers.

Integrated Control

Such systems can soon escalate in complexity and will become difficult to administer and control. Again, looking to the IT industry, there are many solutions. For example, HPE OneSphere brings together all the diverse cloud systems a broadcaster may be employing under one application, simplifying configuration and monitoring.

Understanding where servers and services are being deployed is critical when keeping control of the costs, especially with public cloud systems. As well as simplifying administration, a complete monitoring solution will provide increased granularity so that costs for projects, services and instances can all be gathered in real time. Preconfigured alarms help administrators keep control of costs and identify potential issues quickly.
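As a sketch of how such preconfigured alarms might work, assume per-project hourly costs have already been gathered from the monitoring system; the figures and threshold below are purely illustrative.

```python
# Hypothetical hourly costs gathered from the cloud monitoring system.
hourly_costs = {
    "news-playout": 12.40,
    "sports-graphics": 31.75,
    "archive-transcode": 8.10,
}

BUDGET_PER_HOUR = 25.00  # illustrative per-project spending limit

for project, cost in hourly_costs.items():
    if cost > BUDGET_PER_HOUR:
        print(f"ALARM: {project} is running at ${cost:.2f}/hr, "
              f"exceeding the ${BUDGET_PER_HOUR:.2f}/hr budget")
```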

Self-service is a relatively new concept, and it relies on users being empowered to solve their own problems. By setting limits and providing preselected applications, IT administrators enable users to build and configure their own servers and launch applications. This is incredibly powerful for developers who want the flexibility to work on their own servers while knowing they are working in a safe environment where they cannot do damage, especially if a program under development contains a system-critical bug.

Complete Resilience

Snapshots are copies of a virtualized instance and can be used as backups or as part of a disaster recovery system. A snapshot can be used later to create identical duplicate instances. This is useful because the system administrator doesn't have to install an operating system from scratch when an instance is launched. Instead, they use the snapshot, resulting in a faster launch time, and the administrators know the instance works to a known baseline.

Rip-and-replace takes advantage of virtualization. If a service running on a server instance is problematic, for example because a software upgrade has failed, then the instance is simply deleted from the virtualization control console and another is created in its place. A snapshot can also be used to improve spin-up times.
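Below is a minimal sketch of this snapshot-and-revert pattern, again using the libvirt Python bindings; the guest and snapshot names are assumptions for illustration.

```python
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("media-gateway")  # hypothetical guest instance

# Capture a known-good state before attempting a risky change.
dom.snapshotCreateXML(
    "<domainsnapshot><name>pre-upgrade</name></domainsnapshot>", 0)

upgrade_succeeded = False  # stand-in for the outcome of a software upgrade

if not upgrade_succeeded:
    # Rip and replace: discard the broken state and revert to the snapshot
    # rather than trying to repair the instance in place.
    snap = dom.snapshotLookupByName("pre-upgrade", 0)
    dom.revertToSnapshot(snap, 0)

conn.close()
```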

Software Defined Infrastructure (SDI) describes the collective system of virtualized servers, storage, and networks. Using SDI, broadcasters can focus more on their operation and keep control of costs, and service providers can spend more time delivering services that solve problems, allowing them to focus on their core skillset instead of fighting the hardware.

Diagram 3 – With a fully SDI data center, servers will be virtualized, storage will be software based and the network will be software defined, giving the most optimized and flexible system possible.

Innovation has improved virtualized and cloud computing by leaps and bounds over recent years, to the point where it is now the de facto approach in most IT installations. Public, private, on-prem, and off-prem systems further increase the number of options available to broadcasters and their vendors, delivering systems with unprecedented flexibility and speed.

Recent advances in input-output processing technology have further improved the prospect of virtualization for broadcasting. And in the next article in this series, we investigate the technologies needed to make super-high-speed, low-latency Ethernet data transfer and processing in SDI viable for broadcasters.
