Riding on the back of IT innovation allows broadcasters to benefit from virtualization. In this article, we investigate those benefits and learn how they apply to television, especially as new trailblazers wait in the wings.
Telephone network operators were suffering the same challenges found in many present-day broadcast facilities. Proprietary hardware was prevalent and even increasing as new technologies were developed. Space and power were at a premium, and finding places to install hardware was proving more and more difficult.
Increasing energy costs, the need for more capital investment, and a shortage of skills to design, install, and maintain complex hardware systems were proving unsustainable. Hardware systems also quickly reach end of life, especially when new technologies are constantly being developed. This leads to very short design, install, and operate life cycles, resulting in poor return on investment for operators.
Virtualization to the Rescue
Virtualization addresses these problems by leveraging standard topologies found in IT systems. Building on the economies of scale gained by hardware manufacturers during their research and development cycles, broadcast service suppliers can achieve significantly reduced equipment costs and reduced power consumption.
Before virtualization was available, IT hardware was either under- or over-utilized. Virtualization enables more efficient use of IT infrastructure, as it allows services to be moved seamlessly between resources and balanced more efficiently to make better use of the available hardware.
Hardware design life cycles can last many months and even years. But the economies of scale needed to design and manufacture hardware, such as computer servers and Ethernet switches, no longer apply to software-based development. As data speeds increase, hardware manufacturers must purchase faster and faster test equipment and continually invest in ever smaller pools of specialist skills.
As device speeds increase, the cost of the associated test equipment rises exponentially, and the salaries of the engineers designing the equipment increase as their skillset becomes ever more specialized. This makes high-speed hardware design increasingly difficult for small and medium businesses.
Diagram 1 – Three physical servers, each running multiple copies of different operating systems and applications via the hypervisor virtualized management software.
In virtualized architectures, software vendors can design, build, and test their products on the same infrastructure. And with cloud systems, they can even employ follow-the-sun development strategies to program and release code 24 hours a day.
Innovators will Win
Existing broadcast service vendors may be looking at the developing virtualization platforms with some concern. But this doesn’t have to be the case, as many vendors already specialize in software development. Often, the hardware they designed existed only to deliver the service their software provided. A standardized hardware platform could therefore be an opportunity, because their real value lies in the service their software provides or the problem it solves.
Cloud infrastructures are deployed throughout the world with high-speed connectivity between data centers enabling localized processing to be brought closer to the viewer. This reduces latency considerably and improves the viewer experience.
Virtualization mechanisms are at the heart of cloud computing. Hardware virtualization is the process of creating a virtual machine that acts like a real computer with its own operating system. The host machine is the actual physical server running the virtualization system and the guest machine is the virtual instance. Many virtual instances with differing operating systems can run on a single host device.
Hypervisor and VMM
The software that provides the virtualization and creates the instances is called the hypervisor or Virtual Machine Manager (VMM). Both the VMM and hypervisor sit between the physical server hardware and the virtual instances.
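The host and guest relationship described above can be sketched with a toy model. The classes and names here are illustrative assumptions for this article, not a real hypervisor API:

```python
# Toy model of the hypervisor layer: one physical host running several
# guest instances with differing operating systems.
# Hypothetical classes for illustration only -- not a real hypervisor API.

class GuestVM:
    def __init__(self, name, os):
        self.name = name
        self.os = os  # the guest OS is independent of the host's OS

class Hypervisor:
    """Sits between the physical server hardware and the virtual instances."""
    def __init__(self, host_name):
        self.host_name = host_name
        self.guests = []

    def create_instance(self, name, os):
        vm = GuestVM(name, os)
        self.guests.append(vm)
        return vm

# One host device, many guests with differing operating systems.
host = Hypervisor("physical-server-1")
host.create_instance("playout-1", "Linux")
host.create_instance("graphics-1", "Windows")

print(len(host.guests))                  # 2
print({vm.os for vm in host.guests})
```

The point of the sketch is the layering: applications talk to their guest OS, and only the hypervisor talks to the physical hardware.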
Although virtualization was available in the earliest computers from 1967, it was not implemented in the first x86 architectures, but it saw a resurgence from around 2005. This was due to the improved hardware capabilities of servers, the drive for greater cost efficiency, better control of server farms, and improved security and reliability through hypervisor architectures.
To help improve virtualization, Central Processing Units (CPUs), such as the Intel Xeon series, are built with support for virtualization directly in the silicon, offloading some of the hypervisor’s work from software and providing a form of hardware acceleration.
Diagram 2 – HPE ProLiant DL360 server supporting Intel’s Xeon processor with virtualization hardware support.
Cloud infrastructures provide systems that facilitate automated initialization of instances and control of the devices. Monitoring software, also running in the cloud, establishes whether more resource is needed and will spin up new instances as required. When the workload reduces, the instances are switched off and deleted.
Few businesses have the privilege of building green-field systems; most instead develop organically. Cloud infrastructures exist on-prem and off-prem, and they may be private or public. A broadcaster moving to the cloud may find they have multiple systems from different service suppliers and hardware manufacturers.
Such systems can soon escalate in complexity and will become difficult to administer and control. Again, looking to the IT industry, there are many solutions. For example, HPE OneSphere brings together all the diverse cloud systems a broadcaster may be employing under one application, simplifying configuration and monitoring.
Understanding where servers and services are being deployed is critical when keeping control of the costs, especially with public cloud systems. As well as simplifying administration, a complete monitoring solution will provide increased granularity so that costs for projects, services and instances can all be gathered in real time. Preconfigured alarms help administrators keep control of costs and identify potential issues quickly.
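A cost alarm of this kind reduces to a simple comparison per project. The cost figures, budgets, and the 80% alarm threshold below are illustrative assumptions:

```python
# Sketch of per-project cost monitoring with a preconfigured alarm.
# Projects, spend figures, and the 80% threshold are illustrative assumptions.

def check_budgets(costs, budgets, alarm_ratio=0.8):
    """Return (project, spend, budget) for projects past the alarm threshold."""
    alarms = []
    for project, spend in costs.items():
        budget = budgets.get(project)
        if budget and spend >= alarm_ratio * budget:
            alarms.append((project, spend, budget))
    return alarms

costs = {"playout": 850.0, "archive": 120.0}     # real-time spend per project
budgets = {"playout": 1000.0, "archive": 500.0}  # administrator-set budgets

print(check_budgets(costs, budgets))   # playout has crossed 80% of its budget
```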
Self-service is a relatively new concept and it relies on users being empowered to solve their own problems. By setting limits and providing preselected applications, IT administrators enable users to build and configure their own servers and launch applications. This is incredibly powerful for developers who want the flexibility to work on their own servers whilst knowing they are working in a safe environment where they cannot do damage, especially if a program under development develops a system critical bug.
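The limits that make self-service safe amount to two checks: is the application on the approved list, and does the user have quota left? A minimal sketch, with a hypothetical catalog and CPU quota:

```python
# Sketch of self-service limits: users may only launch preselected
# applications and cannot exceed their resource quota.
# The catalog contents and quota model are illustrative assumptions.

CATALOG = {"ffmpeg-worker", "web-frontend", "db-replica"}

def launch(user_quota_cpus, cpus_in_use, app, cpus_requested):
    """Validate a self-service launch; return the user's new CPU usage."""
    if app not in CATALOG:
        raise ValueError(f"{app} is not in the approved application catalog")
    if cpus_in_use + cpus_requested > user_quota_cpus:
        raise ValueError("launch would exceed the user's CPU quota")
    return cpus_in_use + cpus_requested

print(launch(8, 4, "ffmpeg-worker", 2))   # allowed: within catalog and quota
```

Because the checks run before anything is created, a developer can experiment freely inside the sandbox without being able to exhaust shared resources.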
Snapshots are copies of a virtualized instance and can be used as backups or as part of a disaster recovery system. A snapshot can be used later to create identical duplicate instances. This is useful because the system administrator doesn’t have to install an operating system from scratch when an instance is launched. Instead, they use the snapshot, resulting in a faster launch time, and the administrators know the instance works to a certain level.
Rip-and-replace takes advantage of virtualization. If a service running on a server instance becomes problematic, for example after a failed software upgrade, the instance is simply deleted from the virtualization control console and another created in its place. A snapshot can also be used to improve spin-up times.
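The snapshot and rip-and-replace ideas combine naturally. A minimal sketch, where the dict-based snapshot and fleet model is an illustrative assumption rather than any real cloud API:

```python
import copy

# Sketch of rip-and-replace using a snapshot: delete a faulty instance
# and recreate it from a known-good snapshot rather than reinstalling.
# The dict-based snapshot/instance model is an illustrative assumption.

snapshot = {"os": "Linux", "packages": ["nginx", "ffmpeg"], "healthy": True}

def launch_from_snapshot(snap):
    # Cloning a known-good snapshot is faster than installing an OS
    # from scratch, and the result is known to work to a certain level.
    return copy.deepcopy(snap)

fleet = {"encoder-1": launch_from_snapshot(snapshot)}

# A failed upgrade leaves the instance unhealthy...
fleet["encoder-1"]["healthy"] = False

# ...so rip it out and replace it with a fresh clone of the snapshot.
del fleet["encoder-1"]
fleet["encoder-1"] = launch_from_snapshot(snapshot)

print(fleet["encoder-1"]["healthy"])   # True
```

The deep copy matters: each clone is independent, so deleting or corrupting one instance never touches the snapshot itself.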
Software Defined Infrastructure (SDI) describes the collective system of virtualized servers, storage, and networks. Using SDI, broadcasters can focus more on their operation and keep control of costs, and service providers can spend more time delivering services that solve problems, focusing on their core skillset instead of fighting the hardware.
Diagram 3 – With a fully SDI data center, servers will be virtualized, storage will be software based and the network will be software defined, giving the most optimized and flexible system possible.
Innovation has improved virtualization and cloud computing in leaps and bounds over recent years, to the point where it is now the de facto approach in most IT installations. Public, private, on-prem, and off-prem systems further increase the number of options available to broadcasters and their vendors, delivering systems with unprecedented flexibility and speed.
Recent advances in input-output processing technology have further improved the prospect of virtualization for broadcasting. And in the next article in this series, we investigate the technologies needed to make super-high-speed, low-latency Ethernet data transfer and processing in SDI viable for broadcasters.