The Sponsor's Perspective: The Answer Is: “Yes We Can!”

Flexible architecture opens new business possibilities.

This article was first published as part of the Essential Guide: Empowering Cloud Through Microservices.

Elevate Broadcast Pte Ltd. was an early adopter of Grass Valley’s agile media production and distribution platform, AMPP, and has since used it for a wide variety of projects, from simple signal transport and monitoring to full live remote productions in the cloud. Elevate Broadcast has found that transitioning to a modular, microservices-style architecture has increased its ability to respond quickly to the changing needs of its customers.

Dennis Breckenridge, CEO of Elevate Broadcast, explains: “One of the challenges in many IP environments is that you end up with all these gateway devices that are plumbed together to create a workflow. We can make that work for some situations, but it doesn’t have the same flexibility. You can’t say, ‘Today I need eight inputs and tomorrow they need to be outputs,’ or, ‘I can have a video switcher for this show but use those same resources to shuffle audio for the next show.’ AMPP doesn’t force you to make these decisions that you then have to live with.”

While many of the projects using AMPP do have elements of cloud operations in them, Breckenridge was quick to point out that the flexibility that AMPP provides makes it just as useful in an on-prem environment.

“Last year we built out a big production center. In that case, AMPP is connected to a SMPTE ST 2110 world. The nodes sit in our data center. Then we use it for all kinds of things.

“We use AMPP extensively for format conversion from 1080i to 1080p productions, to do contribution to AI engines for editing or other processing. We use AMPP if we need to post process any signals, for example to multiplex or shuffle audio and then convert the feed to SRT or RIST. Rather than going out and buying converters and all those types of edge devices, we just feed the 2110 signal into AMPP, and it provides what we need.”

GV AMPP Architecture

AMPP is a cloud-first microservices architecture that consists of a Grass Valley operated multi-tenant control plane – which is provided as SaaS – as well as a private customer video processing data plane that can either be in the cloud or on the ground. This enables extremely flexible workflows that have all the advantages of the cloud while recognizing that for some use cases, processing video at the edge makes more economic sense.

Cloud First

AMPP takes advantage of the native services available in public cloud platforms. It consists of a set of microservices distributed across many physical computers in multiple availability zones. Such architectures are described as highly available because the work is spread across many microservices, any one of which can fail without impacting the overall performance of the system. This provides better performance and reliability than a traditional “lift and shift” approach to the cloud, which simply takes monolithic software and runs it on a dedicated VM.
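The resilience described here follows a familiar pattern: a request can be served by any healthy replica of a stateless service, so a single failure is absorbed rather than propagated. Below is a minimal, purely illustrative sketch of that failover pattern; the service names, zone labels, and retry logic are assumptions for illustration, not AMPP's actual implementation.

```python
import random

class ServiceReplica:
    """Stand-in for one instance of a stateless microservice."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} processed {request}"

def call_with_failover(replicas, request):
    """Try replicas in random order; one failure does not fail the request."""
    for replica in random.sample(replicas, len(replicas)):
        try:
            return replica.handle(request)
        except ConnectionError:
            continue  # fall through to the next replica
    raise RuntimeError("all replicas unavailable")

# Three replicas, as if spread across three availability zones.
zone_a = ServiceReplica("transcode-az-a")
zone_b = ServiceReplica("transcode-az-b", healthy=False)  # simulated outage
zone_c = ServiceReplica("transcode-az-c")

result = call_with_failover([zone_a, zone_b, zone_c], "frame-0001")
print(result)
```

Because the work lands on whichever replica is healthy, the simulated outage of one zone is invisible to the caller.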

Single platform view - platform application and data layer spanning three availability zones.

Microservices And Kubernetes

A single AMPP platform control plane is distributed across clusters of compute in different availability zones. AMPP operates on platforms distributed around the world so that customers access a platform that is local to them. Managing many microservices distributed over multiple data centers requires a management layer that handles the lifecycle of stopping and starting all the individual services and managing the resources they have available. Grass Valley uses Kubernetes to manage the control plane. Kubernetes groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
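The lifecycle management Kubernetes performs is built on a reconciliation loop: it continually compares the declared desired state against what is actually running and takes corrective action. The sketch below illustrates one pass of that pattern; the service names and data shapes are assumptions for illustration, not the real internals of Kubernetes or AMPP.

```python
def reconcile(desired_replicas, running):
    """One pass of a desired-state loop: start missing instances, stop extras.

    desired_replicas and running map service name -> instance count.
    Returns the corrective actions a controller would take.
    """
    actions = []
    for service, want in desired_replicas.items():
        have = running.get(service, 0)
        if have < want:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    return actions

desired = {"api-gateway": 3, "scheduler": 2}
running = {"api-gateway": 2, "scheduler": 3}
print(reconcile(desired, running))
# → [('start', 'api-gateway', 1), ('stop', 'scheduler', 1)]
```

Running this loop continuously is what lets a managed platform recover automatically when a node or availability zone drops out: the gap between desired and actual state is simply reconciled away.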

The AMPP Data Plane

Real-time video processing happens on the AMPP data plane. The data plane infrastructure may run on-premises or on a virtual machine hosted in a public cloud, and it is private to an individual customer account.

Many individual AMPP applications can be deployed on a single compute node. These apps can be stopped and started individually as needed, but they all share access to a common set of 10-bit YUV uncompressed video flows, so multiple apps can interact with the same frames of video efficiently without incurring significant latency.
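One common way to let several processes work on the same frames without copying is a named shared-memory segment that one app writes and any number of others attach to by name. The sketch below illustrates that general technique using Python's standard library; the flow name, frame dimensions, and byte packing are assumptions for illustration and do not reflect AMPP's actual flow format.

```python
from multiprocessing import shared_memory

# Assumed dimensions for one 1080p frame, with 10-bit YUV samples stored
# in 16-bit containers (an illustrative layout, not AMPP's real packing).
WIDTH, HEIGHT, BYTES_PER_PIXEL = 1920, 1080, 4
FRAME_BYTES = WIDTH * HEIGHT * BYTES_PER_PIXEL

# A "producer" app allocates the flow once...
flow = shared_memory.SharedMemory(create=True, size=FRAME_BYTES,
                                  name="flow_cam1")
flow.buf[0:4] = bytes([0x10, 0x20, 0x30, 0x40])  # write part of a frame

# ...and a "consumer" app attaches to the same memory by name,
# reading the identical bytes without copying the frame.
reader = shared_memory.SharedMemory(name="flow_cam1")
first_pixel = bytes(reader.buf[0:4])
print(first_pixel.hex())

reader.close()
flow.close()
flow.unlink()
```

Because both processes map the same physical memory, a consumer sees each frame the instant the producer finishes writing it; nothing is serialized or transferred.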

Within the same data plane, you can run many copies of the same app, each with its own specific configuration. These are called workloads and can be managed from a central application called the Resource Manager. The advantage of this approach is that different productions can have their own workloads, which can be stopped and started as a block while preserving all of their individual show setup.
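The workload concept described above (many configured copies of an app, grouped per production and started or stopped as a block) can be sketched as plain data structures. All names and fields here are illustrative assumptions, not AMPP's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """One app instance plus its saved configuration."""
    app: str
    config: dict
    running: bool = False

@dataclass
class Production:
    """A block of workloads started and stopped together."""
    name: str
    workloads: list = field(default_factory=list)

    def start(self):
        for w in self.workloads:
            w.running = True

    def stop(self):
        # Stopping a production halts every workload but leaves its
        # configuration intact, so the show setup is preserved.
        for w in self.workloads:
            w.running = False

show = Production("evening-news", [
    Workload("video-switcher", {"inputs": 8}),
    Workload("audio-shuffler", {"channels": 16}),
])
show.start()
print([w.running for w in show.workloads])  # → [True, True]
show.stop()
print(show.workloads[0].config)             # → {'inputs': 8}
```

The point of the grouping is the last line: after the block is stopped, each workload still carries its own configuration, ready to be brought back exactly as it was.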

Ian Fletcher (left) and Chris Merrill (right).

A New Way Of Working

While it is common to begin experimenting with AMPP as a one-to-one replacement for a specific hardware-based workflow, its true power lies in its ability to rapidly provision whatever workflows are required at any given moment.

“The beauty of AMPP,” said Breckenridge, “is that it is a toolbox that can be applied in so many ways. Before AMPP we had to build up a stock of converters and changeovers, clean and quiet switches, and routing panels – a whole warehouse of purpose-built kit. Now we can be much more dynamic. We can add more inputs and outputs, we can scale the network, we can manage all different types of flows, and then bring all of that into whatever production environment we need: SDI, IP, cloud, hybrid… It really doesn’t matter.”
