Productive Cloud Workflows - Part 2

We conclude this two-part article examining how IP is an enabling technology that facilitates the use of data centers and cloud technology to power media workflows.




Software In The Cloud

Unix was designed around the idea of small programs performing specific tasks that could be pipelined together. This provided much greater flexibility and reliability than writing a new program for every task. But as computing developed, applications moved to huge monolithic designs. Consequently, they were difficult to maintain and team collaboration proved challenging. New releases were infrequent, and thorough testing was difficult due to the sheer number of combinations of inputs and stored data.

To overcome these issues, modern software applications are moving back to smaller, self-contained programs called microservices. These are much easier to maintain and test as they have a reduced domain of input with a much better-defined output range. Consequently, they provide greater reliability, flexibility, and scalability.

Fig 1 – the diagram on the left shows virtualized servers and the diagram on the right shows container systems. Virtualization is more flexible, but containers are much more lightweight.


Examples of microservices are programs that perform specific tasks such as color correction and subsampling, transcoding, and YUV to RGB conversion. The idea is that multiple microservices are daisy chained together to provide solutions to complex workflow needs. Furthermore, they excel in cloud-type environments.
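
As a rough illustration, the sketch below (in Python, using hypothetical service endpoints and field names) shows how a chain of single-purpose services might be invoked in turn, with each stage's output object becoming the next stage's input:

```python
import requests

# Hypothetical microservice endpoints - each performs one small, well-defined task.
stages = [
    "https://color-correct.example.com/v1/jobs",
    "https://yuv-to-rgb.example.com/v1/jobs",
    "https://transcode.example.com/v1/jobs",
]

source = "s3://media-bucket/masters/episode01.mxf"
for i, endpoint in enumerate(stages, start=1):
    destination = f"s3://media-bucket/work/stage{i}.mxf"
    # Each request only names the input and output objects; the media itself
    # never passes through the client.
    requests.post(endpoint,
                  json={"source": source, "destination": destination},
                  timeout=30).raise_for_status()
    source = destination  # the output of one stage feeds the next
```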

To help with the deployment and management of microservices, containers are often used. These are a lightweight alternative to virtualization but still provide a pay-as-you-go model through orchestration, a separate service that automatically spins up servers and starts and stops microservices on demand. Whereas virtualization requires a hypervisor management system that acts as an interface between the CPU and IO hardware, each container runs on the server operating system and provides a contained area of operation.

Containers derive their name from the early Linux cgroups, which evolved into Linux Containers (LXC) and from which later solutions such as Docker and Kubernetes were derived. They allow the host server's operating system to allocate a defined amount of CPU, RAM, and operating threads to each container, thus providing a guaranteed resource allocation.

Furthermore, whereas the virtualized hypervisor solution can provide multiple and differing operating systems, containers are restricted to the host server's operating system kernel. However, only the operating system's user-mode modules are provided in the container, so containers stay lightweight and provide a guaranteed operating environment to work from.
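
As a minimal sketch of that resource allocation, assuming the Docker SDK for Python is installed and a Docker daemon is running, a container can be started with explicit CPU and memory limits (the image name and limit values here are illustrative only):

```python
import docker

client = docker.from_env()

# Run a container with a guaranteed slice of the host's resources:
# roughly one CPU core and 512 MB of RAM, enforced by the kernel's cgroups.
container = client.containers.run(
    "alpine:latest",            # illustrative image
    "sleep 60",
    detach=True,
    nano_cpus=1_000_000_000,    # 1e9 nano-CPUs = 1 CPU core
    mem_limit="512m",
)
print(container.short_id, container.status)
```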

Communication and control for microservices, or larger apps running in either virtualized or containerized environments, rely on RESTful (Representational State Transfer) API interfacing. The RESTful API is particularly useful for cloud applications as the protocol is based on HTTP (Hypertext Transfer Protocol), that is, it uses commands such as GET and POST found in web server applications. This makes integration into cloud systems much easier as the cloud service providers use RESTful APIs as the basis for public communication with their servers.

Cloud computing relies on the API being well defined, and in the modern agile computing paradigm this is well understood and supported. The APIs are scalable and easily maintain backwards compatibility so that upgrades are quickly facilitated. The stateless operation of REST requires the client-to-server request to provide all the information needed for the server to process the data entirely. The server does not retain any state information about the task being processed between requests.

For example, if a client is requesting a transcode operation from a microservice, it must provide the source file location, its destination, and all the parameters needed for the transcode process.
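
A minimal sketch of such a request, using Python's requests library against a hypothetical transcode endpoint with illustrative field names, might look like this:

```python
import requests

# Every parameter the service needs travels with the request,
# because the stateless server holds nothing between calls.
job = {
    "source":       "s3://media-bucket/masters/episode01.mxf",
    "destination":  "s3://media-bucket/proxies/episode01.mp4",
    "video_codec":  "h264",
    "bitrate_kbps": 8000,
    "frame_rate":   25,
    "raster":       "1920x1080",
}

response = requests.post("https://transcode.example.com/v1/jobs", json=job, timeout=30)
response.raise_for_status()
print(response.json())  # typically a job id the client can then poll with GET
```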

Data is exchanged between RESTful services using JSON (JavaScript Object Notation) type files. This is a lightweight and human-readable text format which is programming-language independent and has a syntax familiar from ‘C’ type languages. By human readable, we mean that JSON provides self-describing information within the file using parameter-value pairs. It’s similar to XML except that it is shorter and much easier to parse.

It’s important to note that REST and JSON are not intended to transfer large media files; instead, they provide file locations for the source and destination files as well as the data values needed for the operation. For example, a JSON file may contain the raster size and frame rate of the source file along with the raster size and frame rate of the destination file for an upscaling process. The file locations will be network-mapped drives or storage objects similar to those used in AWS or Azure. The actual media files are accessed directly from the object storage by the application, and it relies on the cloud service provider delivering adequate network capacity and bandwidth to transfer the data for processing.
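
To illustrate that separation, a processing application might pull the source essence straight from object storage and push the result back, for example with AWS's boto3 SDK (bucket and key names here are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# The JSON job only named these objects; the heavy media transfer happens
# directly between the processing server and object storage.
s3.download_file("media-bucket", "masters/episode01.mxf", "/tmp/source.mxf")

# ... upscale /tmp/source.mxf to /tmp/upscaled.mxf using the requested
#     raster size and frame rate ...

s3.upload_file("/tmp/upscaled.mxf", "media-bucket", "deliverables/episode01_uhd.mxf")
```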

Keeping Media In The Cloud

One of the challenges broadcasters often consider when moving their workflows to cloud infrastructures is that of ingress and egress costs. Although these can be significant, it's worth remembering that keeping media files in the cloud is a much more efficient method of working than continuously moving files between the cloud and on-prem. This requires a significant change of mindset but pays dividends in cost and security, as well as flexibility and reliability.

Cloud object storage is not only an efficient and flexible method of storage but is potentially much more secure than on-prem storage. All the major cloud service providers use a token to describe the storage object of a particular media file. In appearance, this is very different to the hierarchical file systems often found with Windows and Linux; instead, it's a long, unique sequence of characters. When the media owner generates the token, they also include the type of access rights and the time for which it is valid. Not only is the token unique, but if the media owner suspects a security breach, they can remove it from the media object, thus stopping access.
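
One concrete example of such a token is an AWS S3 presigned URL, which bundles the object reference, the permitted operation, and an expiry time into a single opaque string. A sketch using boto3 (bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited token granting read access to one media object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "media-bucket", "Key": "masters/episode01.mxf"},
    ExpiresIn=3600,   # the grant lapses after one hour
)
print(url)  # a long, unique character sequence rather than a file path
```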

Fig 2 – the media processing is accessed through the RESTful API using HTTP. When a user requests a service, such as transcoding, the media storage information and the transcoding parameters are stored in the database. The scheduler polls the database for jobs and, when it receives them, spins up new resources as needed. The processing servers access the cloud media storage directly for both the source and destination files.


When a client sends a REST API command to a server service, such as a transcoder, the JSON payload includes the token for the media object being accessed. Furthermore, the media owner can see who has used the token, where and when, thus providing a forensic audit trail for high-value media.
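
Returning to the architecture in Fig 2, the scheduler can be reduced to a simple polling loop. The sketch below is purely illustrative: an in-memory list stands in for the job database written to by the REST API, and start_worker() stands in for a call to the cloud provider's compute API.

```python
import time

# Stand-in for the job database; in practice the REST API writes jobs here.
pending_jobs = [
    {"source": "s3://media-bucket/masters/episode01.mxf",
     "destination": "s3://media-bucket/proxies/episode01.mp4"},
]

def start_worker(job):
    # In a real deployment this would spin up a container or VM that reads the
    # source object and writes the destination object directly in cloud storage.
    print(f"starting worker: {job['source']} -> {job['destination']}")

while True:
    while pending_jobs:                    # poll for queued jobs
        start_worker(pending_jobs.pop(0))
    time.sleep(5)                          # idle before polling again
```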

Cloud service providers also offer backup storage such as Glacier in AWS, or Archive storage in Azure. These are long-term storage systems that are much cheaper than instant-access storage because retrieving a media asset can take hours or even days. This is similar to LTO used in on-prem datacenters. But if all the costs are considered, including physically hosting the LTO machine, supporting and maintaining it, and finding somewhere to store all the tapes, then this type of cloud storage can often be a better alternative. Again, the media files are kept in the cloud so security is maintained, and network bandwidth is supported by the host cloud service provider.
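
As an illustration of that retrieval delay, restoring an archived object with boto3 is an explicit, asynchronous request (bucket, key and restore window here are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Ask for an archived master to be staged back into instant-access storage.
# Depending on the retrieval tier this can take hours to complete.
s3.restore_object(
    Bucket="media-archive-bucket",
    Key="masters/episode01.mxf",
    RestoreRequest={
        "Days": 7,                                  # keep the restored copy for a week
        "GlacierJobParameters": {"Tier": "Bulk"},   # cheapest, slowest retrieval tier
    },
)
```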

For example, a ninety-minute HD baseband media file will use approximately 537GB of storage. AWS Glacier costs approximately 0.36 cents per GB of storage per month, resulting in a cost of $1.93 per month for a ninety-minute asset, or $23 per year.
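
The arithmetic behind those figures is straightforward (prices are approximate and will vary by region and over time):

```python
storage_gb     = 537      # ninety-minute HD baseband asset
price_gb_month = 0.0036   # ~0.36 cents per GB per month

monthly_cost = storage_gb * price_gb_month   # ≈ $1.93 per month
yearly_cost  = monthly_cost * 12             # ≈ $23 per year
print(f"${monthly_cost:.2f}/month, ${yearly_cost:.0f}/year")
```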

Working through an example like this has a major advantage: it allows the business owner to make an informed decision on whether an asset is worth storing or not. This is clearly a tough call, because none of us knows what the future has in store, but it puts the decision firmly back into the hands of the business owner rather than the engineer. One of the challenges we all have is knowing what to keep and what to delete, and a cost figure like this provides a straightforward metric to help establish that.

Making Cloud Work

Cloud workflows require a different type of thinking; the dynamic nature with which cloud operates demands a transient approach to workflow design. To make cloud efficient, processes that are not being used must be deleted, and this in itself requires systems that constantly analyze how the overall cloud system is responding to operational demands.

Metadata is available in abundance within the cloud, and broadcasters can take advantage of this. The enormous amount of monitoring data available can be analyzed to understand where components within the cloud infrastructure can be closed or deleted so they are not unnecessarily consuming expensive resources. Furthermore, removing such components simplifies systems, making them easier to maintain and more reliable.

