Broadcasters Become More Software Driven to Compete in Multiscreen Era

Broadcasters are reverting to being engineering driven after some years operating as little more than content houses, but this time the focus is more on software than infrastructure. That conclusion emerged from the European Broadcasting Union’s (EBU) fourth annual software engineering conference, Devcon, which started in 2013 in recognition that the industry was becoming more IT focused.

This year more than ever before it was clear broadcasting has become more aligned with enterprise IT and is now helping to shape the evolution of distributed computing. In broadcasting, as in other sectors, there is a growing clamor for IT architectures that support microservices and continuous deployment, where apps and features can evolve constantly and be deployed at short notice. This requires some form of software container insulating applications from the surrounding IT infrastructure, including the operating system, underlying hardware platform and network.

At the EBU’s Devcon it was not surprising, therefore, to witness a strong focus on the Docker software container platform, along with the now closely related orchestration platform from Google called Kubernetes. It is true that broadcasters on the whole have remained aloof from the technical debates that have been raging within the software development community over the merits of Docker in particular. That is wise, given it is now clear that this is where the field is heading and that teething problems will be resolved over time. The mood at Devcon was that Docker is coming and will add significant value to applications and services, particularly on the streaming and OTT fronts.

In essence Devcon represents the latest chapter in the long story of virtualization and distributed computing that has been running almost half a century since IBM introduced the concept for its mainframes, separating applications from the hardware they run on to introduce a degree of software portability.

Docker emerged in 2013 as an open source project motivated by the desire to take virtualization a step further by avoiding the need to run a guest operating system. The aim was to make virtualization lighter in terms of resources, and apps easier to install from the command line, rather as in the mobile world. Before Docker, virtualization was usually associated with a layer of software called the hypervisor running on top of a given server’s host operating system, essentially presenting the hardware as a clean slate for deployment of a guest operating system. This provided the necessary separation between application software and hardware for distributed services to be run on commodity platforms, reducing costs and making best use of available resources. But it meant the virtual machines built on commodity hardware comprised not just the application software but also the entire guest operating system along with other supporting software tools, often consuming tens of GBs of storage per server, while also retarding performance.

The Docker architecture avoids need for guest operating system.

Docker reduces storage requirements by replacing the hypervisor with a new layer called the Docker Engine, which runs the software containers holding application code, delivering all the resources each application needs on the given machine while sharing the same host operating system. This reduces the need for RAM as well as disk storage, with the net result of speeding up execution.
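As a concrete illustration, a container image is typically described in a Dockerfile, which layers an application and its dependencies on a shared base image rather than on a full guest operating system. The sketch below is hypothetical (the base image, files and port are illustrative, not from any broadcaster deployment):

```dockerfile
# Minimal sketch: package a hypothetical Node.js service as a container image.
# There is no guest operating system here - only the application, its
# dependencies, and a thin base layer sharing the host kernel.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Add the application code itself.
COPY . .

EXPOSE 8080
CMD ["node", "server.js"]
```

An image built from such a file (with `docker build`) typically weighs tens or hundreds of MBs rather than the tens of GBs cited above for a full virtual machine, because the base image supplies only user-space libraries, not a kernel.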

At least that is the theory, but inevitably the Docker Engine is itself a complex piece of software, larger than the hypervisor it replaces. That offsets some of the benefits in reduced overhead achieved by cutting out the guest operating system. There have also been complaints that Docker can hardly be called open when it only works on servers running either a major version of Linux – admittedly open source – or Microsoft Windows.

Security is another bone of contention. Advocates argue that the Docker Engine strengthens security because software containers isolate applications from one another and from the underlying infrastructure, while providing an added layer of protection for the application. But critics point out that Docker presents a new surface for attack that needs to be addressed, while amplifying the potential impact of any vulnerabilities present in the host operating system kernel. There is no longer the protection provided by the guest operating system and hypervisor, placing more responsibility for security on the host operating system.
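Some of these concerns can be mitigated in the image definition itself. One common hardening step, sketched below with hypothetical names on an Alpine base image, is to run the containerized process as an unprivileged user rather than root, so that a compromised application carries fewer privileges on the shared host kernel:

```dockerfile
FROM node:18-alpine

# Create an unprivileged user and group; a process that escapes the
# application then lacks root privileges on the shared host kernel.
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

WORKDIR /app
COPY --chown=appuser:appgroup . .

# Drop root before the service starts.
USER appuser
CMD ["node", "server.js"]
```

Runtime options such as `docker run --read-only` (an immutable root filesystem) and `--cap-drop ALL` (shedding Linux capabilities) further narrow the attack surface, though none of this removes the responsibility placed on the host operating system.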

But broadcasters should let these issues play out within the Docker community. The bigger picture is that the platform has gained almost universal support from key players such as Google and Microsoft, as well as the wider open source community.

What is true though is that realizing the dreams of virtualization and distributed computing is an ongoing challenge which, having taken 50 years, is not about to be solved at a stroke. The Docker chapter is unlikely to be the last in the saga.

Such sentiments were to an extent in evidence at the EBU’s Devcon, with recognition that Docker is not a panacea for all the pains of software deployment in the microservices era. Broadcasters, like all enterprises, will continue to require highly skilled developers, and Docker does not avoid the need for well-designed software. In fact microservices in general increase the requirement for software built for scalability, and also for skills in software testing, given increased exposure to bugs that might previously have had a more local impact.

The mood of optimism tempered by the challenges was captured at Devcon by Viktor Farcic, a member of the so-called Docker Captains group acting as technical evangelists for the platform. “It is not just about lighter virtual machines,” said Farcic. “It is a completely new way of thinking about how to ship applications in terms of network, storage and computation.” Farcic led a ‘show and tell’ workshop demonstrating how to build, test and deploy services with Docker.
