Viktor Farcic led a 'show and tell' workshop at the EBU’s annual Devcon conference
Broadcasters are reverting to being engineering-driven after some years operating as little more than content houses, but this time the focus is more on software than infrastructure. That conclusion emerged from the EBU’s (European Broadcasting Union) fourth annual software engineering conference, Devcon, which started in 2013 in recognition that the industry was becoming more IT-focused.
This year more than ever it was clear that broadcasting has become aligned with enterprise IT and is now helping to shape the evolution of distributed computing. In broadcasting, as in other sectors, there is a growing clamor for IT architectures that support microservices and continuous delivery, where apps and features evolve constantly and can be deployed at short notice. This requires some form of software container insulating applications from the surrounding IT infrastructure, including the operating system, underlying hardware platform and network.
At the EBU’s Devcon it was not surprising, therefore, to witness a strong focus on the Docker software container platform, along with the now closely related management platform from Google called Kubernetes. It is true that broadcasters on the whole have remained aloof from the technical debates that have been raging within the software development community over the merits of Docker in particular. That is wise, given it is now clear that this is where the field is heading and that teething problems will be resolved over time. The mood at Devcon was that Docker is coming and will add significant value to applications and services, particularly on the streaming and OTT fronts.
In essence Devcon represents the latest chapter in the long story of virtualization and distributed computing that has been running almost half a century since IBM introduced the concept for its mainframes, separating applications from the hardware they run on to introduce a degree of software portability.
Docker emerged in 2013 as an open source project motivated by the desire to take virtualization a step further by avoiding the need for a guest operating system. The aim was to make virtualization lighter on resources and to make apps easier to install from the command line, rather as in the mobile world. Before Docker, virtualization was usually associated with a layer of software called the hypervisor running on top of a given server’s host operating system, essentially presenting the hardware as a clean slate for deployment of a guest operating system. This provided the necessary separation between application software and hardware for distributed services to run on commodity platforms, reducing costs and making best use of available resources. But it meant each virtual machine comprised not just the application software but also an entire guest operating system along with other supporting software tools, often consuming tens of GB of storage per server, while also degrading performance.
Docker reduces storage requirements by replacing the hypervisor with a new layer called the Docker Engine, which runs containers packaging application software together with everything it needs on the given machine, all sharing the same host operating system. This reduces the need for RAM as well as disk storage, with the net result of speeding up execution.
At least that is the theory, but inevitably the Docker Engine is itself a complex piece of software and larger than the hypervisor it replaces. That offsets some of the benefits in reduced overhead achieved by cutting out the guest operating system. There have also been complaints that Docker can hardly be called open when it only works on servers running either a major version of Linux – admittedly open source – or Microsoft Windows.
Security is another bone of contention. Advocates argue that the Docker Engine strengthens security because software containers isolate applications from one another and from the underlying infrastructure, while providing an added layer of protection for the application. But critics point out that Docker presents a new surface for attack that needs to be addressed, while amplifying the potential impact of any vulnerabilities present in the host operating system kernel. There is no longer the protection provided by the guest operating system and hypervisor, placing more responsibility for security on the host operating system.
But broadcasters should let these issues play out within the Docker community. The bigger picture is that the platform has gained almost universal support from key players such as Google and Microsoft, and from the whole open source community.
What is true though is that realizing the dreams of virtualization and distributed computing is an ongoing challenge which, having taken 50 years, is not about to be solved at a stroke. The Docker chapter is unlikely to be the last in the saga.
Such sentiments were to an extent in evidence at the EBU’s Devcon, with recognition that Docker is not a panacea for all the pains of software deployment in the microservices era. Broadcasters, like all enterprises, will continue to require highly skilled developers, and Docker does not remove the need for well-designed software. In fact microservices in general increase the requirement for software built for scalability, and for skills in software testing, given increased exposure to bugs that might previously have had a more local impact.
The mood of optimism tempered by the challenges was captured at Devcon by Viktor Farcic, a member of the so-called Docker Captains group acting as technical evangelists for the platform. “It is not just about lighter virtual machines,” said Farcic. “It is a completely new way of thinking about how to ship applications in terms of network, storage and computation.” Farcic led a 'show and tell' workshop demonstrating how to build, test and deploy services with Docker.
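The build-test-deploy workflow Farcic demonstrated can be sketched in miniature with a Dockerfile. This is purely an illustrative assumption, not material from the workshop itself: the service name, base image and commands below are hypothetical, standing in for whatever stack a broadcaster actually uses.

```dockerfile
# Illustrative sketch only — base image, file names and test command are assumptions.
FROM node:18-alpine            # small base image; the container shares the host kernel

WORKDIR /app

# Copy dependency manifests first so Docker's layer cache can skip
# reinstalling dependencies when only application code changes.
COPY package.json package-lock.json ./
RUN npm ci

COPY . .

# Run the test suite at build time; a failing test aborts the image build,
# so an untested image never reaches deployment.
RUN npm test

EXPOSE 8080
CMD ["node", "server.js"]
```

Building and running such an image would then be a matter of `docker build -t stream-service .` followed by `docker run -p 8080:8080 stream-service`; the same image can be deployed unchanged to any host running the Docker Engine, or handed to Kubernetes for orchestration.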