Broadcasters Become More Software Driven to Compete in Multiscreen Era

Broadcasters are becoming engineering driven again after some years operating as little more than content houses, but this time the focus is more on software than infrastructure. That conclusion emerged from Devcon, the EBU's (European Broadcasting Union) fourth annual software engineering conference, which started in 2013 in recognition that the industry was becoming more IT focused.

This year more than ever it was clear that broadcasting has become aligned with enterprise IT and is now helping to shape the evolution of distributed computing. In broadcasting, as in other sectors, there is a growing clamor for IT architectures that support microservices and continuous delivery, where applications and features can evolve constantly and be deployed at short notice. This requires some form of software container insulating applications from the surrounding IT infrastructure, including the operating system, underlying hardware platform and network.

At the EBU's Devcon it was not surprising, therefore, to see a strong focus on the Docker software container platform, along with the closely related container management platform from Google, Kubernetes. Broadcasters on the whole have remained aloof from the technical debates that have been raging within the software development community over the merits of Docker in particular. That is wise, given it is now clear this is where the field is heading and that teething problems will be resolved over time. The mood at Devcon was that Docker is coming and will add significant value to applications and services, particularly on the streaming and OTT fronts.
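To give a flavor of what Kubernetes adds on top of Docker, the sketch below shows a minimal Kubernetes Deployment declaring that three copies of a containerized service should always be running; Kubernetes then starts, monitors and replaces containers to match that declaration. The component and image names are purely illustrative, not drawn from any broadcaster's actual system:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ott-packager          # hypothetical OTT streaming component
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: ott-packager
  template:
    metadata:
      labels:
        app: ott-packager
    spec:
      containers:
      - name: packager
        image: example/ott-packager:1.0   # illustrative image name
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, this is the declarative management style that makes container platforms attractive for the constantly evolving services described above.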

In essence Devcon represents the latest chapter in the long story of virtualization and distributed computing that has been running almost half a century since IBM introduced the concept for its mainframes, separating applications from the hardware they run on to introduce a degree of software portability.

Docker emerged in 2013 as an open source project motivated by the desire to take virtualization a step further by avoiding the need for a guest operating system. The aim was to make virtualization lighter in terms of resources and also to make apps easier to install from the command line, rather like in the mobile world. Before Docker, virtualization was usually associated with a layer of software called the hypervisor running on top of a given server's host operating system, essentially presenting the hardware as a clean slate for deployment of a guest operating system. This provided the necessary separation between application software and hardware for distributed services to be run on commodity platforms, reducing costs and making best use of available resources. But it meant each virtual machine comprised not just the application software but also an entire guest operating system along with other supporting software tools, often consuming tens of GB of storage per server, while also retarding performance.

The Docker architecture avoids the need for a guest operating system.

Docker reduces storage requirements by replacing the hypervisor with a new layer called the Docker Engine, which runs the software containers, delivering all the resources an application needs on the given machine while sharing the host operating system. This reduces the need for RAM as well as disk storage, with the net result of speeding up execution.
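As a minimal sketch of how this looks in practice, a Dockerfile packages an application together with only its own dependencies; the container shares the host kernel, so no guest operating system is included in the image. The application here (a Node.js service listening on port 8080) and all file names are hypothetical examples, not taken from any specific broadcast deployment:

```dockerfile
# Build a small image for a hypothetical Node.js streaming service.
# The image carries only the app and its dependencies; the host
# kernel is shared, so no guest operating system is bundled.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer
# across rebuilds when only the application code changes.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy the application code and declare how it runs.
COPY server.js ./
EXPOSE 8080
CMD ["node", "server.js"]
```

Built with `docker build -t stream-service .` and started with `docker run -p 8080:8080 stream-service`, the resulting image is typically measured in tens of MB rather than the tens of GB cited for full virtual machines.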

At least that is the theory, but inevitably the Docker Engine is itself a complex piece of software, and a larger one than the hypervisor it replaces. That offsets some of the overhead saved by cutting out the guest operating system. There have also been complaints that Docker can hardly be called open when it only works on servers running either a major version of Linux – admittedly itself open source – or Microsoft Windows.

Security is another bone of contention. Advocates argue that the Docker Engine strengthens security because software containers isolate applications from one another and from the underlying infrastructure, while providing an added layer of protection for the application. But critics point out that Docker presents a new surface for attack that needs to be addressed, while amplifying the potential impact of any vulnerabilities present in the host operating system kernel. There is no longer the protection provided by the guest operating system and hypervisor, placing more responsibility for security on the host operating system.

But broadcasters should let these issues play out within the Docker community. The bigger picture is that the platform has gained almost universal support from key players such as Google and Microsoft, as well as the wider open source community.

What is true though is that realizing the dreams of virtualization and distributed computing is an ongoing challenge which, having taken 50 years, is not about to be solved at a stroke. The Docker chapter is unlikely to be the last in the saga.

Such sentiments were to an extent in evidence at the EBU's Devcon, with recognition that Docker is not a panacea for all the pains of software deployment in the microservices era. Broadcasters, like all enterprises, will continue to require highly skilled developers, and Docker does not remove the need for well-designed software. In fact microservices in general increase the requirement for software built for scalability, and for skills in software testing, given increased exposure to bugs that might previously have had a more local impact.

The mood of optimism tempered by these challenges was captured at Devcon by Viktor Farcic, a member of the so-called Docker Captains group of technical evangelists for the platform. “It is not just about lighter virtual machines,” said Farcic. “It is a completely new way of thinking about how to ship applications in terms of network, storage and computation.” Farcic led a 'show and tell' workshop demonstrating how to build, test and deploy services with Docker.
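Farcic's point about shipping applications "in terms of network, storage and computation" can be illustrated with a Docker Compose file, where all three are declared alongside the service itself rather than configured separately on each machine. The service, network and volume names below are invented for illustration:

```yaml
# Hypothetical Compose sketch: network, storage and computation
# are all declared next to the application they serve.
services:
  encoder:                          # illustrative encoding microservice
    image: example/encoder:1.0      # illustrative image name
    networks:
      - media-net                   # networking shipped with the app
    volumes:
      - media-cache:/var/cache/media   # storage shipped the same way
    deploy:
      replicas: 2                   # computation scaled declaratively

networks:
  media-net:

volumes:
  media-cache:
```

A single `docker compose up` then brings up the service with its network and storage attached, which is the shift in thinking Farcic describes.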
