Productive Cloud Workflows - Part 1

IP is an enabling technology that facilitates the use of data centers and cloud technology to power media workflows. The speed with which COTS (Commercial Off The Shelf) hardware can now process data means video and audio signals can be transcoded, edited, and transferred quickly enough for real-time live and file-based workflows.


However, there is much more to building cloud-based broadcast facilities than just running microservices and SaaS applications. Software systems must be seamlessly integrated to provide stability and dynamic response to viewer demand.

A combination of RESTful APIs and JSON data objects is needed to maintain compatibility between processes so that they can transfer and process video and audio signals.
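As an illustration of this loose coupling, the sketch below submits a hypothetical transcode job to a RESTful endpoint as a JSON object using Python. The endpoint URL, field names, and job parameters are assumptions made for this example and do not represent any particular vendor's API.

```python
import requests

# Hypothetical transcode job description exchanged between two services.
# The endpoint, field names, and job parameters are illustrative only.
job = {
    "jobId": "tx-000123",
    "source": "s3://ingest-bucket/promo_master.mxf",
    "output": {
        "codec": "h264",
        "resolution": "1920x1080",
        "bitrateMbps": 12,
    },
    "priority": "normal",
}

# Submit the job to a transcode microservice over its RESTful API.
response = requests.post(
    "https://transcode.example.internal/v1/jobs",
    json=job,          # serialized as a JSON data object
    timeout=10,
)
response.raise_for_status()

# The service replies with JSON, keeping both sides loosely coupled:
# neither needs to know how the other is implemented, only the contract.
print(response.json()["status"])
```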

Trying to replicate baseband and on-prem workflows directly in the cloud is fraught with challenges. These include overcoming latency and completely changing our working practices to agile methodologies, including rip-and-replace. The dynamic nature of cloud systems demands that workflows be designed with effective strategies to make the most of the scalability, flexibility, and resilience that cloud promises.

Although we can take existing workflows and replicate them in the cloud, to truly benefit from the many opportunities and efficiencies that cloud can offer, we should always design systems that are cloud-aware.

Scaling Sideways

As a broadcasting community, we have spent the best part of eighty years committed to hardware technology. This is mainly due to the speed with which video must be processed, especially in real time. The sampled nature of video relentlessly delivers frames approximately 17 ms or 20 ms apart (for 59.94 Hz and 50 Hz systems respectively), and as the size of the raster increases, so does the data rate. Moving from SD to HD created a fivefold increase in data rate, and moving from HD to 4K quadrupled it again. Consequently, video processing has always been at the forefront of technology, and until recently, only hardware processing has been able to deal with these colossal data rates.
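To put some rough numbers on this, the short calculation below estimates uncompressed active-picture data rates for common formats, assuming 10-bit 4:2:2 sampling and ignoring blanking and audio. The figures are indicative only and are not exact SDI link rates.

```python
# Back-of-envelope figures for uncompressed active video payload, assuming
# 10-bit 4:2:2 sampling (20 bits per pixel) and ignoring blanking and audio.
# Illustrative only; real SDI link rates differ slightly.

BITS_PER_PIXEL = 20  # 10-bit luma plus 10 bits of shared chroma (4:2:2)

formats = {
    "SD 576i25":   (720, 576, 25),
    "HD 1080i25":  (1920, 1080, 25),
    "HD 1080p50":  (1920, 1080, 50),
    "UHD 2160p50": (3840, 2160, 50),
}

def payload_mbps(width, height, frames_per_second):
    """Active picture data rate in megabits per second."""
    return width * height * frames_per_second * BITS_PER_PIXEL / 1e6

for name, (w, h, fps) in formats.items():
    print(f"{name:12s} ~{payload_mbps(w, h, fps):8.1f} Mbit/s")

# SD 576i -> HD 1080i is a 5x jump in pixel count at the same frame rate,
# and HD 1080p -> UHD 2160p is a further 4x, which is why real-time video
# processing has historically demanded dedicated hardware.
```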

Developments in seemingly unrelated industries such as finance and medicine have seen massive investment in COTS-style processing, leading to the development of software defined networks and data processing. Finance, with micro-trading, and medicine, with machine learning, have driven high-volume data processing with incredibly low latency, exactly the type of processing that broadcasters demand for real-time live video and audio processing.

The development of the datacenter has further led to the concept of virtualization. Modern operating systems create events for input/output (I/O) peripherals such as keyboards, mice, and ethernet interfaces. This removes the need for higher-level applications to constantly poll the I/O devices, leaving much more time for the CPUs to process applications.
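The sketch below illustrates this event-driven model in Python using the standard selectors module: the application registers interest in a socket and sleeps until the operating system signals that data has arrived, rather than polling in a loop. The port and the trivial accept-and-close handling are arbitrary choices for the example.

```python
import selectors
import socket

# Event-driven I/O: instead of polling a socket in a tight loop, the
# application registers interest and the operating system wakes it only
# when something actually happens.

sel = selectors.DefaultSelector()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))   # illustrative port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    # Blocks here, consuming no CPU, until the OS signals an event.
    for key, _mask in sel.select():
        conn, addr = key.fileobj.accept()
        print(f"connection from {addr}")
        conn.close()
```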

Virtualization takes advantage of this as the access to all the I/O devices can be divided across multiple operating systems sharing the same hardware, thus improving the efficiency with which servers operate. It is possible to run multiple versions of an application on a single server, but the concept of virtualization creates greater separation between processes to improve reliability as well as security.

Virtualization effectively abstracts the hardware away from the operating systems that it supports, and it is this abstraction that has delivered cloud computing. Another way of thinking about cloud computing is through the idea of software defined hardware, as the virtualizing software is effectively a server management system. This provides a user control method that allows the creation and deletion of virtualized servers, and it is the ability to run multiple operating systems on one server that underpins the concept of cloud computing.

These ideas lead to the ability to rip-and-replace. That is, we can create and delete servers with a few software commands or clicks of a web interface. It is this concept that is the most exciting for broadcast engineers as it is a completely different way of working from traditional hardware workflows. We now have the ability to simply delete processing engines that are no longer needed, allowing demand-led resources to be provided to meet the peaks and troughs of the business.
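As one provider-specific illustration, the sketch below creates and later terminates a virtual server using the AWS boto3 SDK; the AMI ID, instance type, and region are placeholders, and other public clouds offer equivalent calls through their own SDKs or web consoles.

```python
import boto3

# One provider example (AWS EC2 via boto3) of creating and later deleting a
# virtual server with a few lines of code. The AMI ID, instance type, and
# region are placeholders for illustration.

ec2 = boto3.client("ec2", region_name="eu-west-1")

# "Spin up" a transcoding server on demand.
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # illustrative machine image
    InstanceType="c5.2xlarge",
    MinCount=1,
    MaxCount=1,
)
instance_id = result["Instances"][0]["InstanceId"]
print(f"created {instance_id}")

# ...the workflow runs its jobs...

# Delete (terminate) the server once the peak has passed so it no longer
# consumes resource or incurs cost.
ec2.terminate_instances(InstanceIds=[instance_id])
```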

When spinning up servers we are creating operating systems that host the required applications. But it’s important to note that even the most powerful of servers has a physical limit to the number of instances it can host. Therefore, on-prem datacenters suffer from the same limitations as traditional hardware systems in that engineers must design for the peak production demand. Public cloud systems also have only a fixed number of servers in any one datacenter, but the number of servers within public clouds is orders of magnitude greater than any on-prem datacenter a broadcaster could ever hope to build.

With public cloud providers continuously adding more servers, storage, and routers to their datacenters, it’s extremely unlikely that any broadcaster would ever exhaust the available cloud resource.

Key to making cloud infrastructures both resilient and efficient is understanding and accepting the concept of spinning up a resource and then deleting it when it’s no longer required. Automation achieves this by detecting when a workflow requires extra resource and then spinning it up as required. One method often adopted is to use message queues.

Dynamic broadcast workflows are job driven. That is, each action is given a job number and every job that needs to be processed is placed into a queue; a scheduling engine then pulls the jobs from the queue to process them. At this point, the scheduling engine will analyze how many jobs are in the queue and calculate the approximate time that will be taken to process the queue with the available resource. If the estimated time is greater than a predetermined value, then the scheduling engine will spin up new servers. If the estimated time is lower than a predetermined value, then virtual servers will be deleted, thus reducing the amount of available resource.
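A minimal sketch of such a scaling rule is shown below, with invented job timings, thresholds, and instance limits; a real scheduling engine would read the queue depth from a message broker and create or delete servers through the cloud provider's API.

```python
import math

# Sketch of the queue-driven scaling rule described above. The job timings,
# thresholds, and instance limits are invented for illustration.

AVG_JOB_MINUTES = 6          # assumed average transcode time per job
TARGET_QUEUE_MINUTES = 30    # acceptable time to drain the whole queue
MIN_INSTANCES = 1
MAX_INSTANCES = 10           # ceiling agreed with the business owners

def required_instances(jobs_in_queue: int, current_instances: int) -> int:
    """Return how many transcode servers the scheduler should keep running."""
    if jobs_in_queue == 0:
        return MIN_INSTANCES

    # Approximate time to clear the queue with the resource available now.
    work_minutes = jobs_in_queue * AVG_JOB_MINUTES
    estimated_minutes = work_minutes / current_instances

    if estimated_minutes > TARGET_QUEUE_MINUTES:
        # Too slow: spin up enough servers to hit the target drain time.
        needed = math.ceil(work_minutes / TARGET_QUEUE_MINUTES)
        return min(MAX_INSTANCES, needed)

    if estimated_minutes < TARGET_QUEUE_MINUTES:
        # Comfortably ahead: delete surplus servers, keeping enough to stay on target.
        needed = math.ceil(work_minutes / TARGET_QUEUE_MINUTES)
        return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

    return current_instances  # exactly on target, leave the pool alone
```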

An example of this is ad ingest for a broadcaster. Earlier in the week, the number of ads sent by the post houses and advertising agencies will be relatively low. But as the week progresses and we approach Saturday evening, when peak demand arrives, the number of ads being delivered increases massively. A cloud processing system may only use two transcoding instances during the earlier part of the week, but as we approach Friday, the number of transcoding instances may be increased to ten by the scheduling engine. And when the Saturday cut-off has passed, the extra eight transcoding servers will be deleted. It’s important to note that the server instances will actually be deleted so that they no longer consume any server resource. In pay-as-you-go cloud systems, this provides a huge saving for the broadcaster as only the servers needed to meet demand are run, and hence paid for.
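Feeding some invented queue depths into the sketch above reproduces the same pattern: two instances midweek, ten for the pre-deadline peak, and back down once the queue has drained.

```python
# Invented queue depths only: a quiet midweek trickle, the pre-deadline rush,
# and an empty queue after the Saturday cut-off. With the assumptions above,
# the scheduler holds two instances early in the week, scales to the
# ten-instance ceiling for the peak, then deletes the surplus afterwards.
print(required_instances(jobs_in_queue=8,   current_instances=2))   # -> 2
print(required_instances(jobs_in_queue=300, current_instances=2))   # -> 10
print(required_instances(jobs_in_queue=0,   current_instances=10))  # -> 1
```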

As this whole process is effectively automated, the business owners are responsible for determining how many servers are made available to the scheduler. Therefore, the decision on available resource becomes a commercial undertaking and not necessarily an engineering one.
