Cloud services and virtualization are providing new ways of connecting broadcast systems together. But what does it mean to connect software?
Many broadcast engineers have cut their teeth on hardware infrastructures where processing systems can be seen and touched. Signal inputs and outputs are labelled, allowing simple connectivity, and fault finding involves chasing cable numbers through the infrastructure, assuming the schematic diagram has been kept up to date!
Although traditional broadcast systems often communicated over custom serial protocols, and even the infamous GPIO, more recent cloud and virtualized systems rely on a completely different approach: the RESTful API. These APIs are making great inroads into broadcast infrastructures, largely because of their internet heritage.
RESTful APIs, combined with JSON data exchange, form the backbone of most modern websites. They sit on top of HTTP/TCP/IP, so they pass easily through firewalls. Because port 80 (and often port 8080) is associated with HTTP transfers, control, monitoring, and data exchange are easily and safely facilitated in datacenter infrastructures. Furthermore, a plethora of software development libraries support these APIs in the common internet languages, allowing straightforward integration into virtualized applications.
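To make this concrete, here is a minimal sketch in Python of how one service might call another over a RESTful API. The endpoint URL, port, and JSON fields are illustrative assumptions, not a real broadcast service; only the standard library is used.

```python
import json
import urllib.request

# Hypothetical payload: a monitoring client querying a transcoder's status.
# The service name and field names are assumptions for illustration.
payload = {"service": "transcoder-01", "action": "status"}

# Build the HTTP request. Because it is ordinary HTTP on a well-known port,
# firewalls and proxies that already pass web traffic will pass it too.
req = urllib.request.Request(
    "http://media-services.example.com:8080/api/v1/status",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method())                # POST
print(req.get_header("Content-type"))  # application/json
```

The request is just text over HTTP: any language with an HTTP client and a JSON encoder can produce or consume it, which is precisely why these APIs integrate so easily into virtualized applications.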
As broadcast engineers, the challenge we have with RESTful APIs is understanding how they interconnect. How does an ingest application send a media file to a proc-amp? This may sound like a simple question, but the answer provides great insight into the new way of thinking we must all adopt. In essence, the ingest service will not send a media file to the proc-amp, but instead will send URLs describing the source and destination storage locations of the media file to the proc-amp service. This allows the proc-amp to retrieve the file directly from the storage and save its output to the appropriate storage device.
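A sketch of the job message such an ingest service might send helps show how small the exchange really is. The field names, bucket paths, and the gain parameter below are illustrative assumptions, not a standardized schema:

```python
import json

def build_procamp_job(source_url, destination_url, gain=1.0):
    """Build the job description an ingest service might POST to a
    proc-amp service. Only storage locations and parameters are sent;
    the media file itself never crosses the API."""
    return {
        "source": source_url,            # where the proc-amp fetches the media
        "destination": destination_url,  # where it writes its output
        "parameters": {"gain": gain},
    }

job = build_procamp_job(
    "s3://ingest-bucket/clip-0042.mxf",
    "s3://playout-bucket/clip-0042.mxf",
)

# This small JSON description is the entire payload of the API call.
message = json.dumps(job)
```

The proc-amp service receiving this message pulls the file from the source location itself, which is what decouples the control path from the media path.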
This simple but wide-reaching change to signal processing provides incredible flexibility, scalability, and resilience. Changes to the storage system can be made independently of the control and processing services, and multiple services can be deployed to meet peak demand as the workflow dynamics change throughout the day and week. As we expand to the concept of stateless systems, proc-amp services can run on multiple servers, all providing the same process function to a source and destination media file.
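The stateless idea can be sketched in a few lines: because each job carries its own context (source, destination, parameters), any instance of the service can handle any job. The worker function and job fields below are toy assumptions for illustration:

```python
def procamp_worker(job):
    """A stand-in for one proc-amp instance. A real worker would fetch
    job["source"], process it, and write to job["destination"]; here we
    just record what it would do. No state survives between calls."""
    return f"processed {job['source']} -> {job['destination']}"

# A batch of independent jobs, each fully self-describing.
jobs = [
    {"source": f"s3://ingest/clip-{n}.mxf",
     "destination": f"s3://playout/clip-{n}.mxf"}
    for n in range(4)
]

# Because no instance holds state between jobs, these could run on one
# server or forty: instances are added or removed as demand changes.
results = [procamp_worker(job) for job in jobs]
```

This is the property that lets an orchestrator scale the service up for the evening transmission peak and back down overnight without any hand-off of internal state.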
Due to the dynamic nature of software service interconnectivity, it is virtually impossible to provide any meaningful schematic diagram of how the systems connect. Although this may at first seem problematic, it really points to how we must think about flexible and scalable systems going forward. By reducing absolute, static connectivity, we significantly improve resilience, flexibility, and scalability, but at the cost of not knowing where every single connection is.
Instead of having to know where every connection exists at a given moment, we rely on logging to provide forensic analysis should something go wrong. A failure might be down to network congestion or server overload, but knowing how services interacted after the event helps us prepare for, and mitigate, future problems.
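A common way to make that forensic analysis possible is for every service to emit one structured record per interaction. The sketch below, with assumed field names and an assumed central log store, shows the kind of line a log aggregator would later be queried for:

```python
import json
import time

def log_interaction(service, peer, event, detail=""):
    """Emit one structured log line describing a service interaction.
    The schema here is an assumption; in practice the services share an
    agreed schema and ship these lines to a central, searchable store."""
    record = {
        "ts": time.time(),   # when it happened
        "service": service,  # who is reporting
        "peer": peer,        # who they were talking to
        "event": event,      # what happened
        "detail": detail,    # free-text context
    }
    print(json.dumps(record))
    return record

# Reconstructing an incident after the event means searching lines like:
log_interaction("ingest", "proc-amp-02", "job-submitted", "clip-0042")
log_interaction("proc-amp-02", "storage", "fetch-timeout", "possible congestion")
```

With every service logging in the same shape, the connection map that no schematic can capture is rebuilt on demand, for exactly the time window under investigation.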
IP and its associated virtualization are bringing with them a new, dynamic way of operating, and with it we must modify how we approach system design and maintenance. We must abandon our static ideas and think more in terms of probability if we are to truly leverage the power of cloud and virtualized systems.