Broadcasters Become More Software Driven to Compete in Multiscreen Era

Broadcasters are reverting to being engineering driven after some years of operating as little more than content houses, though this time the focus is on software rather than infrastructure. That conclusion emerged from the European Broadcasting Union's (EBU) fourth annual software engineering conference, Devcon, which was launched in 2013 in recognition that the industry was becoming more IT focused.

This year more than ever before it was clear that broadcasting has become aligned with enterprise IT and is now helping to shape the evolution of distributed computing. In broadcasting, as in other sectors, there is a growing clamor for IT architectures that support microservices and continuous delivery, where apps and features evolve constantly and can be deployed at short notice. This requires some form of software container insulating applications from the surrounding IT infrastructure, including the operating system, underlying hardware platform and network.

At the EBU's Devcon it was not surprising, therefore, to witness a strong focus on the Docker software container platform, along with the now closely related container management platform from Google called Kubernetes. It is true that broadcasters on the whole have remained aloof from the technical debates raging within the software development community over the merits of Docker in particular. That is wise, given it is now clear that this is where the field is heading and that teething problems will be resolved over time. The mood at Devcon was that Docker is coming and will add significant value to applications and services, particularly on the streaming and OTT fronts.
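As a flavor of what the Docker-plus-Kubernetes pairing looks like in practice, a Kubernetes manifest declares the desired state of a containerized service and the platform keeps it true. The sketch below is a minimal, hypothetical example (the service name and image are illustrative, not from the Devcon sessions):

```
# Hypothetical sketch: a Kubernetes Deployment that keeps three replicas
# of a containerized transcoding service running, restarting any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transcoder
spec:
  replicas: 3                 # Kubernetes maintains three running copies
  selector:
    matchLabels:
      app: transcoder
  template:
    metadata:
      labels:
        app: transcoder
    spec:
      containers:
      - name: transcoder
        image: example.org/transcoder:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
```

The operator declares how many instances should run; Kubernetes handles scheduling them across machines and replacing failed ones, which is precisely the kind of short-notice, continuous deployment the microservices model demands.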

In essence Devcon represents the latest chapter in the long story of virtualization and distributed computing, which has been running for almost half a century since IBM introduced the concept for its mainframes, separating applications from the hardware they run on to introduce a degree of software portability.

Docker emerged in 2013 as an open source project motivated by the desire to take virtualization a step further by avoiding the need for a guest operating system. The aim was to make virtualization lighter in terms of resources and to make apps easier to install from the command line, rather as in the mobile world. Before Docker, virtualization was usually associated with a layer of software called the hypervisor running on top of a given server's host operating system, essentially presenting the hardware as a clean slate for deployment of a guest operating system. This provided the necessary separation between application software and hardware for distributed services to be run on commodity platforms, reducing costs and making best use of available resources. But it meant the virtual machines built on commodity hardware comprised not just the application software but also an entire guest operating system along with other supporting software tools, often consuming tens of GBs of storage per server, while also retarding performance.

The Docker architecture avoids the need for a guest operating system.

Docker reduces storage requirements by replacing the hypervisor with a new layer called the Docker Engine, which hosts the software containers, delivering all the resources each application needs to run on the given machine while sharing the same host operating system. This reduces the need for RAM as well as disk storage, with the net result of speeding up execution.
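To make the container idea concrete, here is a minimal sketch of a Dockerfile, the recipe the Docker Engine uses to build a container image. The application and base image named here are hypothetical; the point is that only the application and its direct dependencies are packaged, not a guest operating system:

```
# Hypothetical sketch: package a small Python service as a Docker image.
# The image carries the app and its dependencies but no guest OS --
# at run time the container shares the host operating system's kernel.
FROM python:3-slim           # lightweight base image, not a full OS install
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # install only the app's dependencies
COPY app.py .
EXPOSE 8080
CMD ["python", "app.py"]
```

Building and running such an image is then a matter of `docker build -t myservice .` followed by `docker run -p 8080:8080 myservice`, which is the app-store-like command-line installation experience the project originally aimed for.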

At least that is the theory, but inevitably the Docker Engine is itself a complex piece of software and larger than the hypervisor it replaces. That offsets some of the benefits in reduced overhead achieved by cutting out the guest operating system. There have also been complaints that Docker can hardly be called open when it only works on servers running either a major Linux distribution (admittedly open source) or Microsoft Windows.

Security is another bone of contention. Advocates argue that the Docker Engine strengthens security because software containers isolate applications from one another and from the underlying infrastructure, while providing an added layer of protection for the application. But critics point out that Docker presents a new surface for attack that needs to be addressed, while amplifying the potential impact of any vulnerabilities present in the host operating system kernel. There is no longer the protection provided by the guest operating system and hypervisor, placing more responsibility for security on the host operating system.

But broadcasters should just let these issues be played out within the Docker community. The bigger picture is that the platform has gained almost universal support from key players such as Google, Microsoft and the whole open source community.

What is true though is that realizing the dreams of virtualization and distributed computing is an ongoing challenge which, having taken 50 years, is not about to be solved at a stroke. The Docker chapter is unlikely to be the last in the saga.

Such sentiments were to an extent in evidence at the EBU's Devcon, with recognition that Docker is not a panacea for all the pains of software deployment in the microservices era. Broadcasters, like all enterprises, will continue to require highly skilled developers, and Docker does not avoid the need for well-designed software. In fact microservices in general increase the requirement for software built for scalability, and for skills in software testing, given increased exposure to bugs that might previously have had a more local impact.

The mood of optimism tempered by these challenges was captured at Devcon by Viktor Farcic, a member of the so-called Docker Captains group acting as technical evangelists for the platform. "It is not just about lighter virtual machines," said Farcic. "It is a completely new way of thinking about how to ship applications in terms of network, storage and computation." Farcic led a 'show and tell' workshop demonstrating how to build, test and deploy services with Docker.
