Serverless Computing: What, How, Why, When?
For many people the idea of “serverless” software applications might seem counter-intuitive, or even confusing. Software surely needs to run on some kind of computer system somewhere? Isn’t that always called a “server”? So what does it mean when we talk about serverless, how can this approach help us, and when should we use it? And is it time to go “Serverless First”?
The concept of a server, as applied to computing, was first described in a 1965 academic paper and the idea gained traction as networking technologies developed in the 1970s. A server became a convenient means to store files, manage printing, perform data processing and so on. Over time, the word “server” has become everyday shorthand for any back-end computer system; one that most people don’t see, one that’s usually interacted with remotely, and one that runs continuously performing one or more specific functions on-demand for users or for other servers.
Although this architecture served us well for many years, there are some burgeoning problems with it. Moore’s law has reduced rooms full of whirring hardware to pocket-sized devices, and we’re ever more concerned with the cost and impact of energy consumption. Energizing a complete system 24/7 for a dedicated task, and specifying it for the peak needs of that task, is starting to feel a bit decadent. A traditional office print server, for example, caused frustration if users were waiting too long for printouts at peak times, yet it was in use for only about 40 of the week’s 168 hours, and was at peak load for only a fraction of that.
The 21st century has brought a few innovations to help us with this: server virtualization has become the backbone of most serious computing installations, and containerized microservice architectures are increasingly the standard way to build and deploy software, often using container orchestration platforms such as Kubernetes. This all improves the density and efficiency of deployments, but comes with a complexity overhead and requires true scale across multiple applications and locations to maximize the benefits. Returning to the earlier example, there’d be little value in deploying a fully-fledged virtualized Kubernetes platform solely to manage office print servers!
Media Use Cases
If we start to think about more media-specific examples, VOD processing is one where we find much variability in demand. At certain times of day, perhaps just after regional news programs have aired, broadcasters have enormous queues of content they want to process and make available on VOD platforms as soon as possible. If a major sporting event is on, there’s exceptional demand to edit, conform and process highlight content in a timely fashion. At the other end of the spectrum, early mornings, particularly at weekends, can be relatively idle for transcoding and conforming systems.
What if we could rent the exact amount of computing capacity we needed, exactly when we needed it? We could deploy and execute our code, along with our data, and pay only for the resources used and the time we used them. This is the core proposition of serverless functions. In the background there are still servers somewhere, but they’re operated and managed by a cloud provider, with multiple layers of abstraction from the hardware. Providers achieve enormous scale by making these resources available at multiple locations to thousands of customers, each using them only when needed, and they incorporate management, maintenance and cross-site resilience into the proposition. It’s a bit like a city bike hire scheme: the bikes are waiting for you at a convenient location, you unlock one, ride it to where you need to go, and pay for the time you used it. You don’t need to buy a bike, worry about maintaining it, or fear having it stolen.
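To make this concrete, here is a minimal sketch of what a serverless function looks like in practice, written in the AWS Lambda handler style in Python. The event shape and field names are illustrative assumptions, not tied to any particular deployment; the key point is that this is the entire unit of deployment — the platform invokes it on demand, and nothing runs (or is billed) between invocations.

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style handler: responds to an HTTP-style event.

    Illustrative sketch only: the event structure shown mimics an API
    gateway request, but field names will vary by provider and setup.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Locally, the function is just a callable that maps an event to a response, which also makes it straightforward to unit test before deployment.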
Alongside serverless functions, cloud providers offer pay-as-you-use serverless databases, APIs, message queues and other supporting tools. These can create great architectures for automating transaction-based workflows like file deliveries, metadata submissions and API integrations. In media processing we need high availability but have a relatively low and highly variable demand for transactions. Using serverless can offer significant savings over the traditional server approach, especially as it can replace a separate DR proposition as well. And for low-traffic web applications, like monitoring or reporting portals, serving flat files with HTML and JavaScript that trigger serverless functions via serverless APIs is very cost effective. This kind of architecture scales up and down in line with usage, and incurs almost no cost when it isn’t being used at all.
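A file-delivery workflow of the kind described above might begin with a function fired by a storage "object created" event. The sketch below is hypothetical: the event layout mimics an S3-style notification, and the function name, accepted file extensions and queue message format are all invented for illustration. It validates the delivery and returns the message a real deployment would place on a serverless queue for downstream processing.

```python
import json

def on_file_delivered(event):
    """Hypothetical handler for a storage 'object created' notification.

    Checks the delivered file looks like media, then builds the message
    that would be sent to a serverless queue to drive the next step.
    """
    record = event["Records"][0]
    key = record["s3"]["object"]["key"]
    if not key.lower().endswith((".mxf", ".mov")):
        return {"accepted": False, "reason": f"unsupported file type: {key}"}
    return {
        "accepted": True,
        "queue_message": json.dumps({"action": "ingest", "file": key}),
    }
```

Because every step is event-triggered, the whole chain sits idle — at essentially no cost — until a file actually arrives.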
Limitations
Like hire bikes, serverless systems do have limitations. They’re not currently suited to directly running long, compute-intensive workloads such as transcodes, for example. But we can leverage the cloud provider’s API with serverless functions to create and control transcode server instances dynamically. The function can spin up a transcoder, present it with the source file, and initiate the required transcodes. When the job’s finished, another function can be triggered to shut down the transcode server and deliver the output files. With this kind of architecture, we can have as many transcode servers as we need for the peak load, but best of all, we can have no transcode servers when there’s nothing to do.
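The scale-up/scale-to-zero decision those functions would make can be sketched in a few lines. This is not a real provider SDK: `FakeCloud` stands in for something like boto3, and `scale_transcoders`, `jobs_per_instance` and the method names are assumptions chosen for illustration. The logic is the point — launch enough instances to cover the queue, and terminate everything when the queue is empty.

```python
import math

class FakeCloud:
    """Stand-in for a provider SDK (e.g. boto3); just records calls."""
    def __init__(self):
        self.launched = 0
        self.terminated = []
    def launch_instance(self):
        self.launched += 1
    def terminate_instance(self, instance_id):
        self.terminated.append(instance_id)

def scale_transcoders(queue_depth, running_instances, cloud,
                      jobs_per_instance=4):
    """Scaling decision a pair of serverless functions might make.

    Triggered on queue events: add instances towards peak demand,
    and scale to zero when there is nothing left to transcode.
    """
    needed = math.ceil(queue_depth / jobs_per_instance)
    if needed > len(running_instances):
        for _ in range(needed - len(running_instances)):
            cloud.launch_instance()
    elif queue_depth == 0:
        for instance_id in running_instances:
            cloud.terminate_instance(instance_id)
```

With 10 queued jobs and 4 jobs per instance, the function launches three transcoders; once the queue drains, a later invocation terminates them all, so idle cost drops to zero.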
As with many things, the biggest challenge to using serverless architectures is our legacy of people, processes and tools that aren’t geared up to understand, build and maintain them effectively. And serverless systems tend to be very tightly coupled into a cloud provider’s ecosystem, meaning significant adaptation would be required to deploy the same workload with another provider. Nevertheless, the savings and sustainability gains to be had from using serverless are substantial enough that these limitations are worth accepting and working through, and getting started is very straightforward.
So before we rush to turn everything into microservices running on some guise of Kubernetes, it’s worth considering when and where serverless architectures might be a better alternative. Some organizations, mostly beyond broadcasting, have adopted the idea of “serverless first” and look at whether an initiative can be best met with a serverless architecture before diving into a more complex microservices approach. This is something worthy of consideration.