Encoding in the Cloud

The cloud is one of the hot topics in the world of broadcast and media at the moment. Every vendor, it seems, is keen to offer a cloud solution, even if some are not always clear about what it means or where the benefits lie.

Rémi Beaudouin is VP marketing at ATEME

What is the cloud?

Gartner defines the cloud as a means of computing which is scalable and elastic. IBM says it meets three key points: the elasticity to scale up and down on demand; a metered service so you pay for use; and the ability to self-provision rather than relying on others to set up services for you.

It is important to remember that the cloud is not important in itself. It is part of a much wider technological transformation in the media industry, and an important element in achieving the promised new efficiencies.

Until very recently, handling and processing audio and video required bespoke hardware, because that was the only way to create sufficient power to deliver flawless realtime performance. It is only in the last couple of years or so that off-the-shelf computers have reached sufficient processing power to keep up with the demands that broadcasting places upon them.

Moving to IT processing means moving to IT connectivity, so the industry is moving rapidly away from point-to-point links using SDI and other specialist formats towards IP networks. This shift promises to bring real economies to the industry.

In part this is because commodity IT hardware is now very inexpensive, thanks to the massive R&D budgets of the market leaders. But there is a more subtle way in which IP connectivity and commodity hardware bring about a step change in cost.

Virtualisation

With bespoke hardware, each device had a single function. But if we move to processes running in software on standardised hardware, then we do not need a box for each operation: we simply need enough processors for peak demand, starting and stopping software processes as we need them.

This is virtualisation. The application is separated from the hardware: each function appears as a virtual machine, running on common, shared hardware, with some overseeing orchestration layer allocating processors as required.

The orchestration layer – the hypervisor – allocates memory, storage, processing cycles and connectivity as demanded by the application. It can even overlay virtual operating systems if that makes the application layer more efficient.
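
To make that idea concrete, here is a minimal sketch of how an orchestration layer might treat resources. The host capacities, virtual machine demands and field names are invented for illustration and do not describe any particular hypervisor or product.

```python
# Illustrative resource model only: figures and field names are invented.
SHARED_HOST = {"cpu_cores": 32, "memory_gb": 128, "network_gbps": 25}

ENCODER_VM  = {"cpu_cores": 8, "memory_gb": 16, "network_gbps": 5}
PACKAGER_VM = {"cpu_cores": 2, "memory_gb": 4,  "network_gbps": 10}

def can_place(host_free, request):
    """True if the shared host still has every resource the VM demands."""
    return all(host_free[key] >= request[key] for key in request)

free = dict(SHARED_HOST)
for name, vm in [("encoder", ENCODER_VM), ("packager", PACKAGER_VM)]:
    if can_place(free, vm):
        for key in vm:
            free[key] -= vm[key]          # resources allocated on demand
        print(f"placed {name}; remaining {free}")
```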

Virtualisation, then, offers reduced CAPEX because you do not need a machine per function; you simply need to be able to support sufficient virtual machines for the busiest times. That, in turn, means you can create a very flexible system architecture, because you are not connecting machine to machine but simply linking a series of software processes. New workflows, and even completely new functionality, are created simply by defining, in software, which processes need to be applied to content, and allowing the orchestration layer to create the virtual machines as it needs them.
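
As a purely conceptual sketch of that point (the class and function names are invented, and worker processes stand in for virtual machines), a workflow can be expressed as nothing more than an ordered list of software steps, with an orchestration layer allocating workers from a shared pool only while the job runs:

```python
# Conceptual sketch: functions stand in for virtualised processing steps,
# a shared worker pool stands in for the common hardware.
from concurrent.futures import ProcessPoolExecutor

def decode(item):      return f"{item}->decoded"
def scale(item):       return f"{item}->scaled"
def encode_h264(item): return f"{item}->h264"

class Orchestrator:
    """Allocates workers only while a workflow needs them."""
    def __init__(self, max_workers=4):
        self.max_workers = max_workers        # enough capacity for peak demand

    def run(self, workflow, items):
        # Workers are created on demand and released when the job completes,
        # mirroring virtual machines being started and stopped as needed.
        with ProcessPoolExecutor(max_workers=self.max_workers) as pool:
            for step in workflow:
                items = list(pool.map(step, items))
        return items

if __name__ == "__main__":
    workflow = [decode, scale, encode_h264]   # the workflow, defined in software
    print(Orchestrator().run(workflow, ["clip_a", "clip_b"]))
```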

Cloud

The principles of virtualisation can be applied at a number of levels. You could transform the broadcast machine room into a server farm, or you could integrate the media functionality into the enterprise-level IT infrastructure. Processors could be running HR or business process management at one moment, encoding or transcoding the next.

This could prove very effective if, for example, you have a lot of non-realtime encoding jobs, which can be batched to run overnight when IT applications are lightly loaded. Keeping the processing on premises has the additional benefit of keeping content under your roof: many media companies are still nervous about intellectual property control.
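
A minimal sketch of that batching idea is shown below, assuming an overnight window and using ffmpeg simply as a stand-in for whatever encoder is actually deployed; the window, file names and encode settings are invented for illustration.

```python
# Minimal sketch: hold non-realtime transcode jobs in a queue and run them
# only in an assumed off-peak window, when IT applications are lightly loaded.
import datetime
import queue
import subprocess
import time

OFF_PEAK_START, OFF_PEAK_END = 23, 5          # assumed window: 23:00 to 05:00

def is_off_peak(now=None):
    hour = (now or datetime.datetime.now()).hour
    return hour >= OFF_PEAK_START or hour < OFF_PEAK_END

jobs = queue.Queue()
for source in ["ep01.mxf", "ep02.mxf"]:       # hypothetical source files
    jobs.put(["ffmpeg", "-i", source, "-c:v", "libx264", source + ".mp4"])

while not jobs.empty():
    if not is_off_peak():
        time.sleep(600)                       # wait for the overnight window
        continue
    subprocess.run(jobs.get(), check=True)    # run one batch encode at a time
```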

Increasingly, though, businesses are turning to true cloud operations, whereby a service provider takes on the responsibility for providing storage and processing. This might be a third-party company providing specialist services, or it might be a business specialising in the cloud, such as Amazon Web Services (AWS).

The business model here is that you pay for storage space and for processing minutes, so you move from the traditional broadcast CAPEX spend to a largely OPEX account. That has tremendous implications: you pay for what you need, so you have a direct link between a service and the cost of providing it, making decisions around monetisation and commercial viability simple and transparent.
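
The arithmetic behind that transparency is simple enough to sketch; the rates below are invented for illustration and bear no relation to any provider's actual price list.

```python
# Illustrative only: invented rates, not any cloud provider's pricing.
STORAGE_RATE_PER_GB_MONTH = 0.02     # assumed $ per GB stored per month
PROCESSING_RATE_PER_MINUTE = 0.15    # assumed $ per processing minute

def monthly_service_cost(storage_gb, processing_minutes):
    """Direct link between a service and the cost of providing it."""
    return (storage_gb * STORAGE_RATE_PER_GB_MONTH
            + processing_minutes * PROCESSING_RATE_PER_MINUTE)

# e.g. a 2 TB library plus 1,500 transcode minutes in a month
print(f"${monthly_service_cost(2000, 1500):,.2f}")   # -> $265.00
```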

The relationship between media company and cloud provider will be defined by a service level agreement. It will define performance and availability of content and processing. It should also define the level of security which will be imposed around your content, which you should expect to be very high. For reassurance, remember that the American CIA uses AWS for its data!

Encoding

Encoding is a classic cloud application. Delivery of a new drama series will create a large demand for transcoding to multiple formats. The elasticity of a well-provisioned cloud service means that a large number of processor cores can be dedicated to the task, then released to other cloud clients when it is done.
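
A sketch of how that fan-out might look from the media company's side is shown below, assuming a hypothetical cloud_client object whose create_job and wait_for calls stand in for whichever transcode service is actually used; the rendition ladder is likewise invented.

```python
# Hypothetical client and renditions, purely to illustrate elastic fan-out.
from concurrent.futures import ThreadPoolExecutor

RENDITIONS = ["2160p_hevc", "1080p_h264", "720p_h264", "480p_h264"]

def submit_and_wait(cloud_client, source, rendition):
    job = cloud_client.create_job(input=source, preset=rendition)  # hypothetical API
    return cloud_client.wait_for(job)                              # hypothetical API

def transcode_series(cloud_client, episodes):
    # One task per episode/rendition pair; the pool grows to match the work
    # and every worker is released as soon as the batch completes.
    tasks = [(ep, r) for ep in episodes for r in RENDITIONS]
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(submit_and_wait, cloud_client, ep, r)
                   for ep, r in tasks]
        return [f.result() for f in futures]
```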

The result of this hardware abstraction is operational simplicity and very high performance – higher than you could achieve in a traditional architecture without uneconomic levels of CAPEX.

Cloud services are designed to be secure and resilient, with data protection and redundancy defined in the SLA. When new versions of software come along, or you want to add new services, the cloud provider will undertake the necessary sandbox testing and verification, allowing you to adopt them quickly and with full confidence. New processes can be integrated rapidly through the use of software-defined architectures.

The need to move large files around currently presents some obstacles to cloud migration, but with file transfer acceleration and the wider availability of high-speed data circuits this limitation will fade away. The result will be that the cloud provides a solution that is attractive to the finance director, because of the move from CAPEX to OPEX and the direct correlation between a service and the cost of providing it; attractive to operational engineers, because its elasticity gives a highly responsive solution; and attractive to the audience, because content will be available to them, on their preferred platform, faster and in higher quality.
