Encoding in the Cloud

The cloud is one of the hot topics in the world of broadcast and media at the moment. Every vendor, it seems, is keen to offer a cloud solution, even if some are not always clear about what it means or where the benefits lie.

Rémi Beaudouin is VP marketing at ATEME

What is the cloud?

Gartner defines the cloud as a means of computing which is scalable and elastic. IBM says it meets three key points: the elasticity to scale up and down on demand; a metered service so you pay for use; and the ability to self-provision rather than relying on others to set up services for you.

It is important to remember that the cloud is not an end in itself. It is part of a much wider technological transformation in the media industry, and an important element in achieving the promised new efficiencies.

Until very recently, handling and processing audio and video required bespoke hardware, because that was the only way to deliver flawless realtime performance. It is only in the last couple of years or so that off-the-shelf computers have reached sufficient processing power to keep up with the demands that broadcasting places upon them.

Moving to IT processing means moving to IT connectivity, so the industry is moving rapidly away from point-to-point links using SDI and other specialist formats towards IP networks. This shift promises to bring real economies to the industry.

In part this is because commodity IT hardware is now very inexpensive, thanks to the massive R&D budgets of the market leaders. But there is a more subtle way in which IP connectivity and commodity hardware bring about a step change in cost.


With bespoke hardware, each device had a single function. But if we move to processes running in software on standardised hardware, we no longer need a box for each operation; we simply need enough processors for peak demand, starting and stopping software processes as we need them.

This is virtualisation. The application is separated from the hardware: each function appears as a virtual machine, running on common, shared hardware, with some overseeing orchestration layer allocating processors as required.

The orchestration layer, built on a hypervisor, allocates memory, storage, processing cycles and connectivity as demanded by the application. It can even present guest operating systems if that makes the application layer more efficient.

Virtualisation, then, offers reduced CAPEX because you do not need a machine per function; you simply need to be able to support sufficient virtual machines for the busiest times. That, in turn, means you can create a very flexible system architecture, because you are not connecting machine to machine but simply linking a series of software processes. New workflows, and even completely new functionality, are created simply by defining, in software, which processes need to be applied to content, and allowing the orchestration layer to create the virtual machines as needed.
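The idea of a workflow as an ordered list of software processes, with an orchestration layer allocating capacity on demand, can be sketched as follows. This is a minimal illustration: the `Orchestrator` class and the process names are invented for the example and do not correspond to any real broadcast or cloud API.

```python
# Minimal sketch of a software-defined workflow with an orchestration
# layer that allocates processing slots on demand. All names here
# (Orchestrator, the process names) are illustrative assumptions.

class Orchestrator:
    """Allocates a fixed pool of processor slots to software processes."""

    def __init__(self, capacity):
        self.capacity = capacity   # processors provisioned for peak demand
        self.running = []          # processes currently holding a slot

    def start(self, process_name):
        """Spin up a virtual machine for one workflow step."""
        if len(self.running) >= self.capacity:
            raise RuntimeError("no free capacity")
        self.running.append(process_name)
        return process_name

    def stop(self, process_name):
        """Release the slot so other workflows can use it."""
        self.running.remove(process_name)


# A workflow is simply defined in software: an ordered list of the
# processes to be applied to the content.
workflow = ["ingest", "transcode_h264", "package_hls"]

orch = Orchestrator(capacity=4)
for step in workflow:
    orch.start(step)    # orchestrator creates the virtual machine
    # ... process the content ...
    orch.stop(step)     # capacity is returned to the shared pool

print(orch.running)  # → []
```

Adding a completely new workflow is then just a matter of declaring a different list of process names; no new hardware or point-to-point wiring is involved.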


The principles of virtualisation can be applied at a number of levels. You could transform the broadcast machine room into a server farm, or you could integrate the media functionality into the enterprise-level IT infrastructure. Processors could be running HR or business process management at one moment, encoding or transcoding the next.

This could prove very effective, for example, if you have a lot of non-realtime encoding jobs, which could be batched to run overnight when IT applications are lightly loaded. Keeping the processing on premises has the additional benefit of keeping content under your roof: many media companies are still nervous around intellectual property control.
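The overnight-batching idea above can be sketched with a simple scheduling check. The off-peak window, the function names and the job list are all assumptions made for the example, not part of any real scheduler.

```python
from datetime import datetime, time

# Hypothetical off-peak window, when enterprise IT applications are
# lightly loaded. The actual window would come from IT operations.
OFF_PEAK_START = time(22, 0)   # 22:00
OFF_PEAK_END = time(6, 0)      # 06:00

def is_off_peak(now=None):
    """Return True if `now` falls in the overnight off-peak window."""
    t = (now or datetime.now()).time()
    # The window wraps past midnight, so it is the union of two ranges.
    return t >= OFF_PEAK_START or t < OFF_PEAK_END

def schedule(jobs, now=None):
    """Run queued non-realtime encodes only while IT load is light."""
    if is_off_peak(now):
        return [f"encoding {job}" for job in jobs]
    return []  # hold the batch until the off-peak window opens

# A batch of non-realtime encode jobs queued during the day, run at night.
print(schedule(["ep1.mxf", "ep2.mxf"], datetime(2024, 1, 1, 23, 30)))
```

Because the content never leaves the premises in this model, the intellectual-property concern mentioned above does not arise.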

Increasingly, though, businesses are turning to true cloud operations, whereby a service provider takes on the responsibility for providing storage and processing. This might be a third-party company providing specialist services, or it might be a business specialising in the cloud, such as Amazon Web Services (AWS).

The business model here is that you pay for storage space and for processing minutes, so you move from the traditional broadcast CAPEX spend to a largely OPEX account. That has tremendous implications: you pay for what you need, so you have a direct link between a service and the cost of providing it, making decisions around monetisation and commercial viability simple and transparent.
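The metered model described above can be made concrete with a toy cost calculation. The rates here are invented for illustration and are not any provider's actual pricing.

```python
# Illustrative pay-per-use cost model. Both rates are assumptions,
# not real cloud pricing.
STORAGE_RATE_GB_MONTH = 0.02   # $ per GB stored per month (assumed)
PROCESSING_RATE_MIN = 0.05     # $ per processing minute (assumed)

def monthly_cost(storage_gb, processing_minutes):
    """OPEX for one service: metered storage plus metered processing."""
    return (storage_gb * STORAGE_RATE_GB_MONTH
            + processing_minutes * PROCESSING_RATE_MIN)

# Each service's cost maps directly onto the resources it consumed,
# which is what makes monetisation decisions transparent.
print(monthly_cost(storage_gb=500, processing_minutes=1200))  # → 70.0
```

The point is the direct line from a service to its cost: if a channel consumes 500 GB of storage and 1,200 processing minutes, its cost is exactly that, with no CAPEX to amortise across unrelated services.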

The relationship between media company and cloud provider will be defined by a service level agreement. It will define performance and availability of content and processing. It should also define the level of security which will be imposed around your content, which you should expect to be very high. For reassurance, remember that the American CIA uses AWS for its data!


Encoding is a classic cloud application. Delivery of a new drama series will create a surge in demand for transcoding to multiple formats. The elasticity of a well-provisioned cloud service means that a large number of processor cores can be dedicated to the task, then released to other cloud clients when the job is done.
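That elasticity can be sketched as a simple scaling rule: cores are allocated in proportion to the transcode backlog and released when it drains. The numbers and names below are assumptions for illustration, not any provider's real limits.

```python
# Sketch of elastic scaling for a transcode farm. CORES_PER_JOB and
# MAX_POOL are invented figures, not real provider quotas.

CORES_PER_JOB = 4     # assumed cores needed per transcode job
MAX_POOL = 256        # assumed per-tenant cap imposed by the provider

def cores_needed(pending_jobs):
    """Scale the core allocation to the backlog, within the pool cap."""
    return min(pending_jobs * CORES_PER_JOB, MAX_POOL)

# A new drama series arrives: 10 episodes, each transcoded to 6 formats.
episodes, formats = 10, 6
burst = cores_needed(episodes * formats)   # 60 jobs during the burst
idle = cores_needed(0)                     # nothing held once it drains
print(burst, idle)  # → 240 0
```

Between bursts the allocation drops to zero, which is exactly the point: you pay for the 240 cores only while the series is being transcoded.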

The result of this hardware abstraction is operational simplicity and very high performance – higher than you could achieve in a traditional architecture without uneconomic levels of CAPEX.

A well-managed cloud service is secure and resilient, with data protection and redundancy defined in the SLA. When new versions of software come along, or you want to add new services, the cloud provider will undertake the necessary sandbox testing and verification, allowing you to add them quickly and with confidence. New processes can be integrated rapidly through the use of software-defined architectures.

The need to move large files around is currently one of the obstacles to cloud migration, but with the use of file transfer acceleration and the wider availability of high-speed data circuits this limitation will fade away. The result is a solution that is attractive to the finance director, because of the move from CAPEX to OPEX and the direct correlation between a service and the cost of providing it; attractive to the operational engineers, because its elasticity gives a highly responsive system; and attractive to the audience, because content will reach them, on their preferred platform, faster and in higher quality.
