Broadcast engineers have a plethora of tools in their kit-bag for integrating systems. The common denominators are SDI, AES and MADI for media exchange, serial and ethernet protocols for control, and the trusted GPI should everything else fail.
Third-party vendors provide specialized operational control panels for “VT”, video and audio servers, and even red tally lights. Products exist that allow us to route controls into one unit, detect vision switcher and audio desk routings, determine which camera and microphone is on-air, and send a tally signal or perform routing in a matrix.
Cloud broadcasting does not have any usable serial ports or GPIs. SaaS systems are generally closed and don’t take too kindly to being controlled by some esoteric system designed by a broadcast engineer to circumvent a complex workflow.
No GPIs in the Cloud
There are two challenges to overcome: firstly, the only Cloud connection we have is through the internet, and secondly, SaaS Cloud systems are generally web-apps and are not accustomed to being controlled by a VT control panel and jog wheel. There are no GPIs or Sony RS422 connections in the Cloud.
There are ways to control SaaS services but we must think like an Enterprise Software Developer and use protocols more akin to online banking and commerce.
To leverage the power of public and private Cloud computing, software providers must adopt the web server-client model. A web server sits behind a load-balancer in a Cloud datacenter and processes HTTP (HyperText Transfer Protocol) messages from a browser on the user’s computer. HTTP messages are split across several IP packets, and the IP packets are in turn usually encapsulated in ethernet frames; whether ethernet encapsulation is used or something like ATM depends on the underlying network.
An HTTP message representing the user input is sent to the web server, which processes it and sends the result back to the user’s browser.
Web-server applications such as SaaS are fundamentally different from broadcast applications as they are stateless operations. In other words, when a browser sends an HTTP message, the web-server processes that message, creates a result and sends it back to the browser, but doesn’t retain any information about that request. In effect, the web-server treats each request individually.
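As a minimal sketch of this text-based request/response cycle, the messages can be built by hand so their parts are visible. The host, path and payload below are hypothetical examples, not any real SaaS API; note that each request carries everything the server needs, which is what "stateless" means in practice.

```python
# A hypothetical HTTP request, built by hand so the framing is visible.
request = (
    "GET /playout/status HTTP/1.1\r\n"   # request line: method, path, version
    "Host: saas.example.com\r\n"         # the Host header is mandatory in HTTP/1.1
    "Accept: application/json\r\n"
    "\r\n"                               # a blank line ends the header block
)

# The server's reply follows the same text-based framing: a status line,
# headers, a blank line, then the body.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: 20\r\n"
    "\r\n"
    '{"status": "on-air"}'
)
```

Because the server retains nothing between messages, sending the same request twice simply produces two independent responses.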
Cookies Identify You
Websites need to store information about the user for authenticated secure sites, that is, sites you must supply a username and password to log into. This is referred to as “stateful” information. Since HTTP itself is stateless, cookies are used to store stateful information on the user’s computer.
When a user logs onto a website with a username and password, the website will create a random sequence of characters compliant with HTTP and unique to that user, store it in its database, and send it to the user’s computer. The sequence is stored in the user’s computer memory for session cookies, or on the hard disk drive if the cookie is persistent.
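The server-side half of this exchange can be sketched with the Python standard library. The function names and the in-memory dictionary standing in for the server's database are illustrative assumptions; a real system would check credentials and use persistent storage.

```python
import secrets

sessions = {}  # session ID -> username; a stand-in for the server's database


def log_in(username, password):
    """Issue a random session ID after (hypothetical) credential checks."""
    # secrets.token_urlsafe() gives a cryptographically random string
    # that is safe to place in an HTTP header.
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = username
    return session_id


sid = log_in("vt_operator", "correct-horse")

# The server sends the ID back to the browser in a Set-Cookie header.
header = f"Set-Cookie: session={sid}; HttpOnly"
```

On every later request the browser returns the ID, and the server looks it up in `sessions` to recover which user is calling.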
Persistent Cookies Eventually Expire
Each time the user sends a request to the web server it must include this session ID. When received, the web server looks up the session ID in its database so it knows which user made the request, and sends back a response based on that user’s attributes.
Cookies can expire depending on the configuration of the web-server and the application being run. Session cookies tend to expire when the browser application is closed, but a persistent cookie’s expiration time is set by the server when the cookie is first issued, and the browser enforces it. The expiration date can be anywhere from a few minutes to many years ahead.
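The two lifetimes can be demonstrated with the standard library's `http.cookies` module. The cookie name, value and thirty-day lifetime below are example values only.

```python
from http.cookies import SimpleCookie

# A session cookie: no Expires or Max-Age attribute, so the browser
# holds it in memory and discards it when the application closes.
session_cookie = SimpleCookie()
session_cookie["session"] = "abc123"

# A persistent cookie: Max-Age tells the browser to keep it on disk
# for the given number of seconds (here, 30 days).
persistent_cookie = SimpleCookie()
persistent_cookie["session"] = "abc123"
persistent_cookie["session"]["max-age"] = 30 * 24 * 3600

print(persistent_cookie.output())
# e.g. Set-Cookie: session=abc123; Max-Age=2592000
```

The only difference on the wire is the extra attribute in the Set-Cookie header; everything else about the session mechanism is identical.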
Representational State Transfer (REST) web services are a way of integrating data between Cloud services through an Application Programming Interface (API). REST was developed by Roy Fielding as part of his 2000 PhD dissertation, and is a formalization of HTTP commands to exchange data, embracing the design philosophy of performance, scalability, simplicity, modifiability, visibility, portability and reliability.
REST is Stateless
At the core of REST is the assumption that we are using the stateless client-server computer model with HTTP. RESTful APIs are used to exchange data in the client-server model using four different commands: GET, PUT, POST and DELETE. They vary in how much they affect the data within a web server, a concept computer science calls side effects.
Understanding the REST commands is key for any broadcast engineer looking to fault-find and log events within a network. The system relies on the client sending a request command to the server for some specific data. For example, a web browser will often send “GET /” to request the homepage of a website from a server.
GET is the safest of the four commands: it has no side effects, so continually sending a GET command to a web server will not change any data in the web server. This command could be used to get the status of a server, or retrieve the homepage.
Beware of Multiple Triggers
PUT is not safe but is idempotent. In computer science, idempotent describes an operation that gives the same result whether executed once or many times. This prevents multiple triggering of an event or continuous updating of data in the web server. It’s not safe because it has side effects: sending it will change a data value or parameter within the web server or database, but repeating it changes nothing further.
POST is not safe and is not idempotent. Sending a POST command multiple times will trigger the event the same number of times in the web server. Like PUT, it’s not safe because it has side effects; some data within the server’s domain is changing. If we POST a “cue playout” command and then send it five times, the media will recue five times, even if it is already playing. The developer needs to protect against this.
DELETE is not safe but it is idempotent. Like PUT it will change data, but repeating the request leaves the server in the same state: deleting an already-deleted resource changes nothing.
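The safety and idempotency properties of the four commands can be sketched with a toy in-memory "server". The resource names, the event list and the function bodies are all illustrative assumptions, not a real API.

```python
# A toy in-memory store standing in for the web server's data.
resources = {"clip1": {"state": "cued"}}


def get(rid):
    """GET: safe and idempotent -- reading never changes the store."""
    return resources.get(rid)


def put(rid, data):
    """PUT: not safe (it writes) but idempotent -- repeating it
    leaves the store in the same state."""
    resources[rid] = data


def post(events, data):
    """POST: not safe and not idempotent -- every call adds a new event,
    so five 'cue playout' POSTs recue the media five times."""
    events.append(data)


def delete(rid):
    """DELETE: not safe but idempotent -- deleting an already-deleted
    resource leaves the store unchanged."""
    resources.pop(rid, None)


events = []
put("clip1", {"state": "playing"})
put("clip1", {"state": "playing"})   # second PUT changes nothing further
post(events, "cue playout")
post(events, "cue playout")          # second POST triggers the event again
delete("clip1")
delete("clip1")                      # second DELETE is harmless
```

Running the sequence twice for each command makes the distinction concrete: after two PUTs the resource holds one value, after two POSTs the event list holds two entries, and after two DELETEs the resource is simply gone.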
We're in the Hands of Developers
Adherence to this system is entirely in the hands of the developers, and assumes they both understand the subtle differences between the commands and have programmed accordingly.
As we move into private and public cloud systems, broadcasters must adhere to philosophies such as REST to leverage the cost savings within IT systems. The efficiencies of cloud computing rely on all parties speaking the same language, otherwise we create complexity, which will increase costs and decrease reliability.