Building Software Defined Infrastructure: Effective APIs

Examples from IT and gaming show that the reliable exchange of data between applications from different vendors often comes from commercial collaboration around the establishment of clearly defined protocols.
Television is a story of standards. From SMPTE to EBU, broadcasters have had thousands of standards to thank for the continued success of television. Switch on any TV in any country across the world and the viewer will see the pictures and hear the sound.
All this interoperability is down to the millions of people-hours that employees, enthusiasts and volunteers have spent agonizing over how television should work, and then developing the necessary standards to define the solution.
So, what’s the problem with standards? The simple answer is that they take a long, long time to design, agree, and write. There is good reason for this, as television is built on the premise of backwards compatibility. As new features are developed, existing systems must be supported. The first major example of this was the introduction of color. The NTSC and PAL standards were developed so that viewers with existing black and white televisions could watch the color transmissions without any disturbance to their viewing experience, while viewers with the new color televisions could watch the new broadcasts.
Shackles Of Backwards Compatibility
The progression of television over the past ninety years has been built on the idea of backwards compatibility. But this has been both its greatest strength and its greatest weakness.
One of the reasons the internet has been so successful is that it uses IP as its transport system and HTML/HTTP to deliver web pages. IP is particularly important as it was designed to be hardware agnostic and open. Proprietary systems have a great deal of importance in commercial applications as they guarantee a certain level of quality. Open systems, such as open-source software, cannot always boast this, especially in their early releases. But what they do offer is collaboration on a massive scale.
It’s important to remember that IP wasn’t designed for broadcasting; it was designed as a method of exchanging data between computers that was hardware agnostic and resilient. Also, its open nature allowed many designers and developers to move quickly, developing and implementing the standard without having to go through multiple committee layers. Committee-led standards are of course extremely important, as SMPTE and the EBU have demonstrated, but the adoption of IP and all the other protocols that support the internet shows there is another way.
Alternatives To Standards Committees
Before demolishing the concept of standards committees, it might be interesting to see what the world would look like without them. In the early days of television, technology moved much more slowly. There was a need to define the number of frames and lines in an image, as well as the color space and audio format, as there was no other way of presenting the information to the viewer’s television set. And even if you did have a choice of the number of lines, televisions right up to the 1990s were restricted in the formats they could work with.
It wasn’t until the adoption of flat screen televisions that we saw a greater number of formats each television could decode. The limiting factor pre-flatscreen was the copper coils that made up the electromagnets in the CRTs. In those times, quite simply, if we had not had SMPTE/EBU/ITU etc., television would not have worked as reliably as it did and continues to do.
An interesting parallel to the development of television is what is going on in the gaming industry, where frame rates continue to increase, the number of video lines keeps expanding, and color spaces are improving all the time. And there don’t seem to be any committees defining the standards before the new systems are developed. One reason is the longevity of modern computing equipment, or lack of it. In the CRT days, televisions were expected to last many years due to their massive initial cost. But as technology has developed, the cost of modern viewing equipment has declined. It’s not unusual for people to replace computers and laptops every two or three years, perhaps even less for gaming enthusiasts. Does the concept of backwards compatibility even exist in the fast-paced world of gaming?
Computer systems are delivering equally outstanding progress and innovation. Here, the transitions are often driven by competing manufacturers who see collaboration as a way forward. For example, in 2007, Microsoft, AMD and Intel worked together to replace the BIOS motherboard system with the then new Unified Extensible Firmware Interface (UEFI). The commercial demands of these huge companies meant that UEFI was developed relatively quickly. Also, the specification was open-sourced so that other motherboard manufacturers could build compatible systems.
There seems to be a recurring theme of success: when a group of commercial organizations collaborates to build a standard, and then open-sources it, the solution becomes very popular throughout the world. In effect, it then transitions to being community led with commercial support.
RESTful APIs
One method that exemplifies the interaction of community and commerce is the RESTful API. Representational State Transfer (REST) was defined by computer scientist Dr. Roy Fielding in his doctoral dissertation. He defined a set of constraints that provide universal rules allowing developers to integrate software. Prior to REST (pre-2000), developers used hand-written XML documents with an RPC (Remote Procedure Call) to allow software programs to communicate over the internet.
This led to the development of RESTful interfaces and the adoption of OpenAPI, a specification used to describe RESTful APIs. OpenAPI is not the API itself but a specification for defining the RESTful API of the developer’s application. SmartBear originally developed the specification (as Swagger) but later donated it to the Linux Foundation and made it open source.

Diagram 1 – This HTTP POST request to a multiformat media ingest microservice tells the app what input to expect, which output format to use, and which endpoint to stream it to. Although an obscure PC input format has been selected, the microservice must be able to convert from it, and the RESTful API document will clearly show the accepted inputs and outputs.
RESTful APIs use HTTP methods such as GET, POST, PUT, and DELETE to send and request information between a client and a server, and these methods specify the action to be taken by the microservice. Each API request should be self-contained so that it maintains the stateless principle, where all the necessary information is provided by the client to the server, allowing the microservice running on the server to process the media.
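As a minimal sketch of how the four verbs might map onto a hypothetical ingest microservice, the Python snippet below issues one request per method. The host name, paths and payload fields are illustrative assumptions, not a real vendor API; note that each call carries everything it needs, so the server holds no session state between requests.

```python
# Hedged sketch: the four common HTTP verbs against a hypothetical
# media ingest resource. Every call is self-contained (stateless).
import requests

base = "https://media-services.example.com/v1/ingests"  # assumed endpoint

requests.post(base, json={"input_format": "prores"})        # create a job
requests.get(f"{base}/42")                                  # read its state
requests.put(f"{base}/42", json={"input_format": "dnxhd"})  # replace its config
requests.delete(f"{base}/42")                               # remove the job
```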
Processes running within a web type environment, including microservices, use a Globally Unique Identifier (GUID). This is a unique 128-bit number used to identify objects such as media files and microservice instances, allowing each object, whether storage or processing, to be individually referenced. This allows the client to request specific actions on specific media streams within the microservices and containers that make up the software defined architectures.
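In practice, a GUID is what most programming languages call a UUID. The short Python sketch below shows one being generated and confirms it fits in 128 bits; the variable name is purely illustrative.

```python
# GUIDs are 128-bit identifiers; Python's uuid module generates them.
import uuid

ingest_service_id = uuid.uuid4()                  # random (version 4) UUID
print(ingest_service_id)                          # e.g. 9f1b2c3d-...
print(ingest_service_id.int.bit_length() <= 128)  # True: fits in 128 bits
```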
If a multiformat media ingest microservice is running in the software defined infrastructure, then the controller that creates the microservice will also provide it with a GUID. When the controller, possibly an app or web service running on a PC, wants to tell the ingest microservice to start inputting a stream and converting it to the broadcaster’s mezzanine format, it will include in the HTTP POST request the GUID associated with the multiformat media ingest microservice. This removes any ambiguity so that the correct microservice is addressed.
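A sketch of what such a controller request might look like is shown below, assuming a hypothetical endpoint where the GUID appears in the URL path; the field names and URLs are invented for illustration.

```python
# Hedged sketch: the controller addresses one specific ingest microservice
# by the GUID it was given at creation, then asks it to start converting.
import requests

service_guid = "6fa459ea-ee8a-4ca4-894e-db77e160355e"  # assigned by the controller

resp = requests.post(
    f"https://media-services.example.com/v1/services/{service_guid}/ingest",
    json={
        "input_url": "srt://remote-encoder.example.com:9000",  # assumed source
        "input_format": "obscure-pc-format",  # the vendor-defined input
        "output_format": "mezzanine",         # the broadcaster's house format
    },
)
resp.raise_for_status()  # the request carried all the state the service needs
```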
The controller may want periodic updates from the microservice and will still use the GUID, but this time issue a GET request. The number of HTTP requests from the client to the microservice application is often restricted to prevent network flooding or denial of service attacks. For example, in their public RESTful API, GitHub allow 60 requests per hour for unauthenticated users, and 5,000 per hour for authenticated users.
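A polling loop along these lines might look like the sketch below, which also backs off when the server signals rate limiting with HTTP 429; the status endpoint and response body are assumptions.

```python
# Hedged sketch: periodically GET the microservice's status via its GUID,
# honouring the server's rate limit (HTTP 429 plus a Retry-After header).
import time
import requests

status_url = ("https://media-services.example.com/v1/services/"
              "6fa459ea-ee8a-4ca4-894e-db77e160355e/status")

for _ in range(10):
    resp = requests.get(status_url)
    if resp.status_code == 429:  # too many requests: back off and retry
        time.sleep(int(resp.headers.get("Retry-After", "5")))
        continue
    print(resp.json())           # e.g. {"state": "ingesting"}
    time.sleep(60)               # periodic update interval
```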
As can be seen from Diagram 1, the input format to the microservice is not a traditional broadcast format but has been defined by the vendor without the need to comply with a specific standard. This speeds up integration while at the same time maintaining order and ease of operation within the infrastructure.
RESTful APIs also work alongside zero trust security as they provide an authentication token in the header of the API request which is known only to the user and the system operating the broadcaster’s software defined infrastructure, thus providing a high level of security.
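The sketch below shows where such a token typically travels, in an Authorization header attached to every request; the token source and endpoint are assumptions for illustration.

```python
# Hedged sketch: every request carries its own authentication token,
# consistent with a zero trust model where no call is implicitly trusted.
import os
import requests

token = os.environ["MEDIA_API_TOKEN"]  # assumed to be issued out-of-band

resp = requests.get(
    "https://media-services.example.com/v1/services",
    headers={"Authorization": f"Bearer {token}"},
)
if resp.status_code == 401:
    raise RuntimeError("Token rejected - request not authenticated")
```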
Multi-Vendor Integration
Although APIs provide an extremely flexible way of controlling and monitoring microservices and their functionality, we still need to address the issue of how we physically move a signal within a datacenter. This is particularly troublesome when multiple vendors are trying to send and receive video and audio between each other. There are no SDI connections in the public cloud, and moving video and audio data takes on a whole new meaning, especially if the microservice apps run on the same physical hardware.
To address this, the EBU have expanded their Digital Media Facility (DMF) reference architecture to include their Media Exchange Layer (MXL), a proposed solution for moving media around datacenters, including both on-prem and cloud systems. The ‘MXL Project’ is a new venture that brings together a significant pool of vendors alongside the EBU, working in conjunction with the Linux Foundation. The project, at the working proof of concept stage as of July 2025, complements the API by providing an open SDK that abstracts the underlying hardware, so that multiple vendors can send and receive media streams, or video and audio signals as we used to call them. This is particularly useful as it should enable vendors to exchange media across different architectures and vendor IT systems without having to get involved with the hardware layers.
Easily Definable
Key to the RESTful API philosophy is that it is a software interface that can be easily defined by the developer through a simple document. The API can include data structures, commands, and variable definitions to make it clear what the microservice can achieve and how. As OpenAPI has defined the structure for RESTful API documentation, there is no need for reams of standards documents to define how the interface should work. It’s self-documenting and self-explanatory.
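As a hedged illustration of how compact such a document can be, the snippet below sketches an OpenAPI 3 description for the hypothetical ingest endpoint used earlier, expressed here as a Python dictionary (these documents are normally written in YAML or JSON); all paths and fields are assumptions.

```python
# Hedged sketch of a minimal OpenAPI 3 description for the hypothetical
# ingest endpoint, held as a Python dict rather than the usual YAML/JSON.
openapi_doc = {
    "openapi": "3.0.3",
    "info": {"title": "Media Ingest Microservice", "version": "1.0.0"},
    "paths": {
        "/services/{guid}/ingest": {
            "post": {
                "summary": "Start ingesting and converting a stream",
                "parameters": [{
                    "name": "guid", "in": "path", "required": True,
                    "schema": {"type": "string", "format": "uuid"},
                }],
                "requestBody": {"content": {"application/json": {"schema": {
                    "type": "object",
                    "properties": {
                        "input_url": {"type": "string"},
                        "input_format": {"type": "string"},
                        "output_format": {"type": "string"},
                    },
                    "required": ["input_format", "output_format"],
                }}}},
                "responses": {"202": {"description": "Ingest job accepted"}},
            }
        }
    },
}
```

Standard tooling can render such a description as interactive documentation or generate client code from it, which is what makes the approach self-documenting.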