Implementing software in public clouds might not be as straightforward as it seems. Outdated software licensing models restrict one of the fundamental advantages of cloud systems: their ability to grow and shrink as the dynamics of the business demand, sometimes by the hour.
Businesses rarely expand predictably, in either the short or long term. Longer-term trends usually result in the IT Director over-spec'ing resource, and short-term trends result in delays as equipment must be procured and installed at short notice.
Public cloud systems such as AWS and Azure provide a near-infinite amount of resource at the click of a button. Long-term trends can be satisfied as soon as the Finance Director signs off the purchase order, and short-term trends can be met quickly. Even peak demand can be catered for, as pay-as-you-go computing lets infrastructure managers buy resource by the hour.
Artificial intelligence meets the requirements of peak demand. Jobs to be processed are represented in a queue, and when this gets too large the software automatically spins up more servers using disk images on virtual machines. As the jobs are processed and the queue diminishes, the software stops allocating jobs to a virtual machine, closes it down and deletes it, effectively stopping the charges associated with it.
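The queue-driven scaling decision described above can be sketched as a simple function; the function name, per-server job capacity and server limits here are illustrative assumptions, not any vendor's API:

```python
import math

def desired_servers(queue_length, jobs_per_server, min_servers=1, max_servers=20):
    """Return how many servers should be running for the current queue depth.

    Servers are added as the queue grows and released (stopping their
    charges) as it shrinks, mirroring the spin-up/delete cycle above.
    """
    if queue_length <= 0:
        needed = min_servers
    else:
        needed = math.ceil(queue_length / jobs_per_server)
    # Clamp to sane bounds so a runaway queue cannot spin up unlimited servers.
    return max(min_servers, min(max_servers, needed))
```

For example, `desired_servers(120, 10)` asks for 12 servers, while an empty queue falls back to the minimum floor. A real controller would compare this target with the currently running fleet and create or delete the difference.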
The dynamic expansion and contraction of cloud resources is one of its major selling points and most important features. Artificial intelligence has been reborn as developers all over the world write algorithms that allow their software to start thinking for itself. Not only does the software look at the number of jobs in a queue, it also uses big-data analytics of past trends to determine when to bring additional resource online, effectively predicting the business's demand.
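The trend-based prediction could be sketched, under the simplifying assumption that a moving average of recent queue-depth changes approximates the business trend; the class and method names are illustrative:

```python
from collections import deque

class TrendPredictor:
    """Forecast near-term queue depth from recent samples so capacity can
    be brought online before the peak arrives, not after."""

    def __init__(self, window=5):
        # Keep only the most recent `window` queue-depth samples.
        self.samples = deque(maxlen=window)

    def record(self, queue_length):
        self.samples.append(queue_length)

    def predict_next(self):
        """Extrapolate one step ahead using the average recent delta."""
        if len(self.samples) < 2:
            return self.samples[-1] if self.samples else 0
        history = list(self.samples)
        deltas = [b - a for a, b in zip(history, history[1:])]
        avg_delta = sum(deltas) / len(deltas)
        return max(0, history[-1] + avg_delta)
```

A production system would feed the forecast, rather than the raw queue depth, into the scaling decision, so servers are already booting when the predicted peak hits.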
True cloud computing requires software to spin up and delete servers to meet the peak demands of the business.
All this might sound like the utopian model of computing: businesses all over the world can rejoice as they no longer tie up capital in unused IT kit for hot spares and possible expansion. But there are some big pitfalls, especially if the service provider has bought in third-party software tools and libraries.
Air-Conditioning Costs Escalate
Traditionally, broadcast manufacturers have designed their software using linear methods that map one-to-one directly to servers, resulting in a system that is time-predictable but inefficient in its use of resource. To increase capacity, new servers would be purchased with the transcoding software and appropriate licensing; again, this leaves the system time-predictable but inefficient in resource allocation.
It’s worth remembering that each time a server is added to a datacenter, not only do we have to provide electricity to power it, we also increase the power consumption of the air-conditioning system needed to take away the heat it’s generating. As most of the energy used by a server is converted to heat, the air-conditioning system will use roughly the same amount of power as the server it is cooling.
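Taking the rule of thumb above at face value (the cooling plant drawing roughly the same power as the IT load it removes), the total facility draw can be estimated with a simple calculation; the helper name and its default factor are illustrative assumptions:

```python
def total_power_kw(servers, watts_per_server, cooling_factor=1.0):
    """Estimate total facility draw in kW: IT load plus cooling overhead.

    cooling_factor=1.0 encodes the rule of thumb that the air-conditioning
    consumes about as much power as the servers it cools, i.e. every
    server effectively costs twice its nameplate power.
    """
    it_load_kw = servers * watts_per_server / 1000
    return it_load_kw * (1 + cooling_factor)
```

So ten 500 W transcoding servers cost the facility around 10 kW, not 5 kW, which is exactly why deleting idle cloud servers saves more than just the compute charge.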
Broadcast systems are typically designed for the worst-case scenario: in a transcoding farm the maximum number of servers is provided to meet peak requirements, and then a fudge factor is added to allow for expansion. It is possible to use virtualization; however, video input-output, ethernet and disk demands are extremely high, and these resources can often cause a bottleneck in the design, requiring more servers to be added to the cluster and negating many of the benefits of virtualization.
Cloud systems win hands down when we use dynamic expansion, as we’re adding intelligent parallelism to the software and only using extra servers, on a pay-as-you-go basis, when needed. This presents two great challenges for linear software developers: they must be able to detect changes in demand, and then provide the resource needed to accommodate the change.
Cloud Washed describes the practice of mapping servers one-to-one into the cloud using linear design methods, and it is a very quick way of burning lots of cash. IT Directors will very quickly provide costings demonstrating the high cost of such cloud solutions compared to their datacenter equivalents, and they’re right.
Load balancing is required in cloud computing to meet up-time SLAs; this needs to be considered at the very beginning of the software design.
In cloud systems, detecting peak demand is easy using services such as AWS Simple Queue Service (SQS), a fully managed messaging system. It is possible to write your own software to provide this service, but with pressure now being placed on development teams to deliver new features, it’s no longer cost-effective to write bespoke software when off-the-shelf systems are readily available.
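As a sketch, queue depth can be read from SQS via boto3's `get_queue_attributes` call; the function name and queue URL below are illustrative, and in production `sqs_client` would be `boto3.client("sqs")`:

```python
def queue_depth(sqs_client, queue_url):
    """Return the approximate number of messages waiting in an SQS queue.

    sqs_client is any object exposing the boto3 SQS get_queue_attributes
    method; passing it in keeps the function testable without AWS access.
    """
    resp = sqs_client.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    # SQS returns attribute values as strings.
    return int(resp["Attributes"]["ApproximateNumberOfMessages"])
```

This single number is the trigger for the whole spin-up/delete cycle: poll it on a schedule, feed it to the scaling logic, and adjust the fleet accordingly.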
The downside of using services such as SQS is that you must write the software from the ground up using proprietary libraries supplied by a single vendor. Quite soon, a system architect will find that they have used so many services from AWS or Azure, such as databases, load balancers and SQS, that they are heavily tied into that vendor.
The decisions made in software systems now go above the heads of the software architect and IT Director, as they soon find themselves making commercially sensitive decisions that are way above their pay grade or skill set and should be referred to the CEO.
The challenge for system architects, IT Directors and software engineers is that their job is changing from one of design to one of communication: they must present their solutions to the CEO in the CEO’s language so the best decisions for the business can be made.
Cloud Born systems usually manifest themselves as Software-as-a-Service applications, embracing the true nature of cloud computing and leveraging its cost savings. Cloud Washed systems are datacenters in the cloud, and they will prove very expensive and difficult to maintain.
Unfortunately, many broadcast manufacturers (not all) are presenting their cloud systems as dynamic solutions when they’re simply Cloud Washed. To fully utilize the cloud, they must demonstrate the ability to detect peak demands, spin up new servers, process the jobs and delete the servers when no longer required; only then can they truly call themselves Cloud Born.