Playout & Transmission Global Viewpoint – July 2021

Why Lift And Shift Fails

Cloud computing is arguably one of the most important advances in broadcast television since the development of the electron scanning beam. Empowered by the adoption of IP, cloud is proving its worth in all aspects of television. But what is it about cloud computing that makes it so valuable for broadcasters?

Lift and shift is a term that describes a particular approach to cloud migration. Essentially, a user takes the software part of their existing application, installs it on a cloud server, and provides it as a software service. But without a whole host of other adaptations, this strategy will ultimately fail.

Computing in the public cloud earns its efficiency, resilience, and flexibility by recognizing peak demand and scaling the system to meet it. However, for as long as we can remember, broadcasters have been designing their infrastructures to deal with the peak demands of television programs. This has often led to the procurement of incredibly expensive equipment that spends a great deal of time sitting around doing nothing, much to the dismay of the CFO and CEO.

For some time, broadcasters have been moving to software systems without realizing it. A massive number of processing solutions have been transferred to x86 technology in the form of 1U pizza boxes that take a minute or two to boot. This has worked to our advantage, and broadcasters have recognized the benefits of using COTS equipment over custom designs. I’m sure we will always need some form of hardware customization, especially for the human interface, but much of the heavy processing can now take place in servers with minimal latency.

In its worst incarnation, lift and shift is extremely wasteful of resources as it relies on servers constantly running to facilitate processing jobs, essentially replicating peak demand allocation. Even when there are no jobs to process, the established number of servers keeps running. Furthermore, a vendor may insist on the processing happening in isolation, thus incurring potentially inefficient data ingress and egress costs. Why download the video and audio output from one cloud function only to send it back a short while later to be processed by another vendor’s solution?
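
To see the scale of the waste, consider a back-of-the-envelope comparison between a peak-sized fleet that never switches off and one that is billed only while jobs exist. All figures below are hypothetical assumptions chosen for illustration, not any provider’s published pricing.

HOURS_PER_MONTH = 730

instance_price = 1.50   # assumed hourly price of a transcode-capable server
peak_servers = 10       # fleet sized for peak demand
utilization = 0.15      # assumed fraction of the month jobs actually run

# Lift and shift: the peak-sized fleet runs around the clock.
always_on = peak_servers * instance_price * HOURS_PER_MONTH

# Scaled: servers are billed only while there is work to process.
scaled = always_on * utilization

print(f"Always-on monthly compute: ${always_on:,.0f}")  # $10,950
print(f"Scaled monthly compute:    ${scaled:,.0f}")     # $1,642

On these assumed numbers, the static fleet costs roughly six to seven times more, before any inter-vendor egress charges are added on top.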

All a vendor has done in this scenario is move the static peak demand solution from on-prem to the public cloud. Paradoxically, this may cost even more than procuring dedicated equipment for the task, especially once we take into consideration the ingress, egress, compute, and storage charges.

The real magic of cloud computing happens when we start thinking in terms of dynamic systems that scale. Other industries have perfected these methodologies and benefit greatly from dynamic infrastructures. Key to facilitating dynamic systems is the ability to spin up and spin down server resources as they are needed.

Scaling, that is, spinning servers up and down again, works when the compute resource understands the number of jobs it needs to process. If cloud computing is fully embraced, this becomes a relatively easy task to achieve through message queueing and management. Each time a broadcaster needs to perform a specific task, such as transcoding, the ordering system places an entry in a job queue. The processing engine removes jobs from this queue, and if too many build up in the pipeline, it requests new resources by spinning up more servers to increase the transcoding capacity. The opposite occurs when the job queue empties, so compute resource is provided only when it’s needed.
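
A minimal sketch of this queue-driven pattern follows; the queue, the job names, and the jobs-per-server figure are illustrative assumptions rather than any specific vendor’s API.

import queue

JOBS_PER_SERVER = 5  # assumed number of transcode jobs one server runs at once

job_queue: "queue.Queue[str]" = queue.Queue()

def servers_needed(backlog: int) -> int:
    """Size the fleet to the backlog: ceiling(backlog / JOBS_PER_SERVER)."""
    return -(-backlog // JOBS_PER_SERVER)

# Each transcode order places an entry in the job queue...
for n in range(12):
    job_queue.put(f"transcode-job-{n}")

# ...and a scaler periodically matches the server count to the queue depth.
print(servers_needed(job_queue.qsize()))  # 12 queued jobs -> 3 servers
print(servers_needed(0))                  # empty queue    -> 0 servers

In production the scaler would poll the queue on a timer and call the provider’s instance API rather than print, but the spin-up and spin-down decision is exactly this comparison of queue depth against capacity.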

Simply moving a processing application such as a transcoder or standards converter to the public cloud in a lift and shift manner will not leverage the full efficiencies available. However, fully embracing the public cloud’s scalability will deliver unprecedented efficiency, flexibility, and resilience for broadcasters.