Egress and ingress charges are often cited as barriers to adopting cloud workflows. But is there an alternative that harnesses the power of cloud workflows while keeping transfer costs to a minimum?
To truly leverage the cloud, we must stop thinking in terms of static workflows and capital investment. This might sound easy, as we cannot physically touch the equipment, but the services on offer are only part of the story.
Moving existing workflows unchanged to public cloud infrastructure is wasteful of resources because it does not take advantage of the scalable nature of cloud computing. Keeping servers running when they are not needed is inefficient and unnecessary, and can even prove more costly than an on-prem datacenter. The real benefit appears when we allow the infrastructure to react to business needs, that is, when the amount of resource scales to meet peak demand.
These are easy words to write, but what exactly do we mean by scale, and how do we actually spin servers up and down? One option is to represent each workflow task as a message placed on a queue; the processing engine takes messages from the head of the queue and executes the tasks. If the queue grows too long, that is, if jobs enter the queue faster than the processing engine can execute them, we spin up more servers to absorb the increased workload. The opposite applies when spinning servers down.
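The scale-up and scale-down rule described above can be sketched in a few lines. This is a minimal illustration only; the per-server capacity and the server limits are assumed figures, not any particular cloud provider's defaults.

```python
# Minimal sketch of queue-depth autoscaling. The jobs-per-server capacity
# and min/max server counts are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ScalingPolicy:
    jobs_per_server: int = 10  # assumed throughput capacity of one server
    min_servers: int = 1       # never scale below this floor
    max_servers: int = 20      # cap spend by never scaling above this ceiling


def desired_servers(queue_depth: int, policy: ScalingPolicy) -> int:
    """Return how many servers should be running for the current backlog."""
    # Round up: 25 queued jobs at 10 jobs per server needs 3 servers.
    needed = -(-queue_depth // policy.jobs_per_server)
    return max(policy.min_servers, min(policy.max_servers, needed))
```

A scheduler would call `desired_servers` periodically and start or stop instances until the running count matches the returned value; the same function handles both scaling directions.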
Dynamic server allocation introduces a new level of complexity and shows why workflows, and the software programs that support them, must be designed around message queues (or similar mechanisms) from the very beginning of the infrastructure analysis. This methodology also adds an important management feature: monitoring. By measuring the size of each queue, and there will probably be many of them, system administrators can manage the amount of cloud resource available, or even automate the task.
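As a simple illustration of that monitoring idea, the helper below flags queues whose backlog has grown beyond an alert threshold. The queue names and threshold are hypothetical, and a real deployment would pull the depths from the queueing service's metrics rather than a plain dictionary.

```python
# Illustrative monitoring helper across several workflow queues.
# Queue names and the alert threshold are hypothetical examples.
def queues_needing_attention(depths: dict[str, int], threshold: int = 100) -> list[str]:
    """Return, in sorted order, the queues whose backlog exceeds the threshold."""
    return sorted(name for name, depth in depths.items() if depth > threshold)
```

An administrator's dashboard could display this list directly, or an automation rule could feed each flagged queue into the scaling logic.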
This thinking delivers much better efficiencies when we start to consider high-capacity storage. Many broadcasters are using or exploring hybrid media asset management systems in which cloud storage and on-prem storage are integrated. This gives the best of both worlds for cloud processing, as servers executing jobs in the cloud benefit greatly from having the media assets stored near them; in this context, "near" refers to the same cloud region. Not only does this improve speed of execution, but if the media asset is kept in the cloud, the egress and ingress costs are significantly reduced.
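The cost argument can be made concrete with a back-of-envelope comparison: transfers within one region are typically free of charge, while pulling an asset back on-prem incurs a per-gigabyte egress fee. The rate used here is a placeholder assumption, not any real provider's price list.

```python
# Hedged cost sketch: in-region processing versus pulling the asset out.
# The per-GB egress rate is an assumed placeholder, not a quoted price.
EGRESS_RATE_PER_GB = 0.09  # assumed USD per GB transferred out of the region


def transfer_cost(asset_size_gb: float, same_region: bool) -> float:
    """Intra-region transfers are modelled as free; egress is billed per GB."""
    return 0.0 if same_region else asset_size_gb * EGRESS_RATE_PER_GB
```

For a 500 GB master file, processing in the region where the asset lives costs nothing in transfer fees, while moving it out would cost tens of dollars per pass, a figure that multiplies quickly across daily workflows.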
Understanding when to keep media assets in the cloud, and the access times with which they can be retrieved, is another intelligent decision that an automated, dynamic system can make.
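Such a decision might reduce to a tiering rule that weighs expected access frequency against acceptable retrieval delay. The tier names and limits below are illustrative assumptions and do not correspond to any provider's actual storage classes.

```python
# Hypothetical storage-tier selection from access frequency and the
# acceptable retrieval delay. Tier names and thresholds are illustrative.
def choose_tier(accesses_per_month: int, max_retrieval_hours: float) -> str:
    """Return a storage tier for an asset given its expected usage pattern."""
    if accesses_per_month >= 10:
        return "hot"      # frequently used: immediate access, highest storage cost
    if max_retrieval_hours < 1:
        return "cool"     # infrequent access but still needs fast retrieval
    return "archive"      # rarely touched: cheapest storage, hours to restore
```

An automated system would re-evaluate this rule as usage patterns change, migrating assets between tiers without operator intervention.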
Empowering the cloud requires deep analysis of workflows and an open mind when it comes to implementation. Keeping media assets in the cloud improves efficiency, especially when processing large media assets within the same cloud region.