RESTful APIs are the cornerstone of communications between users and microservices. Building efficient, flexible, and resilient interfaces relies on a deeper understanding of how internet services communicate.
Cloud computing relies on the underlying protocols governing the internet. HTTP is the protocol that facilitates data exchange between browsers and servers, and HTTP in turn operates over TCP/IP. In the early days of internet computing, a browser would request data from, or send data to, a server, which created the need for a common protocol. Browsers later evolved into dedicated applications, but because these apps kept the same HTTP/TCP/IP stack, they could communicate with servers just as a browser would, and effectively replace it.
Users generally employ one of two methods of communicating with a cloud-based server, either through a web browser, or an application. Whether using a cell phone or desktop computer, the user communication method falls into one of these two camps. But from the point-of-view of the server, it doesn’t really care whether it is communicating with a web browser or a custom application as the command and data exchanges are essentially the same.
There is a small caveat concerning screen size, as the web server will often need to know the size of the screen its data is being rendered to. A large desktop screen requires a different layout from the small screen of a mobile device, leading the server to deliver different formatting information. This is often taken care of in the data exchange when the service first connects, so the relevant CSS file can be provided by the server.
What is RESTful?
REST (REpresentational State Transfer) is not a protocol but a software architecture that provides an interface between devices that are physically separated. Applications residing within the same server may well use other methods of communicating and exchanging data such as shared memory or local pipes. However, these methods require tightly coupled interfaces and for the programs to reside on the same physical machine, thus resulting in a serious limitation for scalability.
In more formal terms, a REST-compliant server will transfer to a client a representation of the state of the requested resource. In a broadcast example, a user wanting to know the number of files in the transcoder job queue requests, through an API, the state of the transcoder queue (the resource): the number of files in the queue, their sizes, and when they were transferred. The representation (the information about the resource) can be delivered in a JSON, XML, or HTML format.
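As a sketch of what such a representation might look like, the snippet below builds a hypothetical transcoder-queue state and serializes it to JSON, the format a server would typically return. The field names and values are illustrative, not a real API schema.

```python
import json

# Hypothetical representation of the transcoder queue resource, as a
# server might return it in response to a query on the queue endpoint.
queue_state = {
    "resource": "transcoder_jobs",
    "job_count": 2,
    "jobs": [
        {"file": "promo.mxf", "size_bytes": 1_200_000_000,
         "transferred": "2024-01-10T09:14:00Z"},
        {"file": "bulletin.mxf", "size_bytes": 850_000_000,
         "transferred": "2024-01-10T09:20:00Z"},
    ],
}

payload = json.dumps(queue_state)   # the representation sent over HTTP
restored = json.loads(payload)      # the client reconstructs the state
```

Because the representation is plain text, it is human-readable and can be consumed by any client regardless of language or platform.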
HTTP is an application layer protocol for distributed, collaborative hypermedia information systems. In other words, it allows distributed computers to communicate using a well-defined protocol that is understood throughout the world and operates on top of TCP/IP, making systems scalable and flexible.
Although REST was developed in parallel with HTTP by Roy Thomas Fielding, REST is not directly tied to HTTP; HTTP is simply the protocol most commonly used with REST.
REST defines five architectural constraints to make a web service completely RESTful:
- Uniform Interface
- Client-Server
- Stateless
- Cacheable
- Layered System
Although we are predominantly considering microservices, it’s worth remembering that to achieve the scalability that makes them so attractive, broadcasters will probably be using public cloud based microservices alongside their private datacenters. Consequently, to maintain complete freedom and flexibility, the APIs employed must comply with the requirements of the World Wide Web.
REST has at its core of operation the concept of resources. There are alternatives to REST such as SOAP (Simple Object Access Protocol) and RPC (Remote Procedure Call), but at their core they use the concept of procedures and methods. REST is considered more lightweight to implement, with human-readable data exchange, whereas SOAP is considered more difficult to set up and develop against.
From the perspective of REST, a resource is anything that can be named, such as a transcoder, standards converter, proc-amp, etc. Each of these resources is identified using a URI (Uniform Resource Identifier), sometimes referred to as an endpoint.
Figure 1a shows how a URI is used to access a resource group that is then subdivided further. The “transcoder_jobs” resource when queried will provide a list of jobs waiting to be processed. Figure 1b shows the state information of a specific job in response to a query such as a GET request.
Figure 1a) A Uniform Resource Identifier (URI) showing the resource transcoder_jobs. Fig 1b) showing the results of requesting the state of a job in the transcoder queue, probably using the HTTP GET method.
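The endpoint idea can be sketched as a routing table that maps URI paths to handler functions, which is conceptually what a web framework does on the server side. The paths, job data, and handler names below are hypothetical illustrations, not part of any real API.

```python
# Hypothetical in-memory job queue backing the transcoder_jobs resource.
TRANSCODER_QUEUE = [
    {"job_id": "job-0042", "file": "news_open.mxf", "state": "waiting"},
]

def list_transcoder_jobs():
    """Handler for the transcoder_jobs resource: return the job list."""
    return TRANSCODER_QUEUE

# Routing table: each URI endpoint maps to the handler for that resource.
ROUTES = {"/api/v1/transcoder_jobs": list_transcoder_jobs}

def handle_get(uri):
    """Dispatch a GET request to the handler registered for the URI."""
    handler = ROUTES.get(uri)
    if handler is None:
        return 404, None          # unknown resource
    return 200, handler()

status, body = handle_get("/api/v1/transcoder_jobs")
```

A GET on a registered endpoint returns the resource representation; a GET on an unknown path returns a 404, mirroring standard HTTP behavior.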
The fundamental reasoning behind the RESTful API is to separate the public interface from the process doing the work, in this case, the transcoder microservice. That way, the resource can be updated independently of the process querying it or updating its state.
The client-server model allows the client and server services to evolve independently of each other. This allows the code on the client side to be changed at any time without having to worry how this will affect the server side.
Maintaining backwards compatibility is crucial for this to work, resulting in loose coupling between the client and the server. Because it communicates through a global URI, the client does not need to know which specific server it is working with. Instead, the URI is mapped on the server side by a device such as a load balancer, and this forms the basis of scalability: if the load is too much for one server, the orchestrator can spin up additional servers to take up some of the increased load, and when the load decreases, the surplus servers are deleted.
Statelessness forms another key component of scalability. Being stateless means that the client provides enough information for the server to complete a task; in IT terms, the server does not hold any information about the state of the process. For example, if a user pushes a button on a web page to request some data from the server’s database, the request must carry enough information for the query to be executed by any one server. If the user then presses the button again, all the request data must be resent; the client cannot assume the server has any knowledge of the process that was just requested. Furthermore, a different server may process the second query, as the load balancer may direct the query elsewhere.
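The stateless idea can be sketched as a handler that uses only what arrives in the request itself. Because no session state is kept between calls, resending the same request to any server behind the load balancer yields the same result. The handler and field names are illustrative.

```python
# Sketch of a stateless request handler: every call is self-contained.
# No state is stored between calls, so any server instance can answer
# any request. The request fields below are hypothetical.

def handle_query(request):
    # The handler uses only the data carried in this request.
    table = request["table"]
    limit = request["limit"]
    return {"table": table, "rows": list(range(limit))}

req = {"table": "media_assets", "limit": 3}
first = handle_query(req)
second = handle_query(req)   # resending the full request: identical result
```

If the handler instead remembered the previous call (for example, a per-session cursor), the two responses could differ depending on which server received the request, breaking horizontal scaling.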
This is all well and good for very quick, blocking-type queries. However, broadcast operations often take a long time, and waiting for a twenty-minute transcode to complete before the user’s web page can update is not desirable. To overcome this, the transcode process is considered to hold one of three states: start, running, or stop. When the client requests a transcode operation it sends a start message; the load balancer may schedule the process to a microservice running on server A and keep it there until it completes and changes to the stop state. Server A sends back a “running” response with a unique identifier in its return JSON file, so that the client can query the process every second or so to determine its progress. Although this is a long process, it is still considered stateless, as the process is self-contained. If the start message is resent by the user, the orchestrator will create a new process with a new state (hopefully the logic in the orchestrator will realize this has happened and provide some form of arbitration to stop two simultaneous processes from running).
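The start/running/stop pattern described above can be sketched as follows. The job store is a simple in-memory dict standing in for a real service, and completion is simulated by a poll counter; function names and the polling scheme are hypothetical.

```python
import uuid

# In-memory job store standing in for the transcoder microservice state.
JOBS = {}

def start_transcode(source_file):
    """Client sends a start message; server returns a unique job id."""
    job_id = str(uuid.uuid4())
    # Simulate a job that will report "running" for three polls.
    JOBS[job_id] = {"file": source_file, "state": "running", "polls_left": 3}
    return {"job_id": job_id, "state": "running"}

def poll_transcode(job_id):
    """Client polls with the job id; server reports current state."""
    job = JOBS[job_id]
    job["polls_left"] -= 1
    if job["polls_left"] <= 0:
        job["state"] = "stop"          # transcode has completed
    return {"job_id": job_id, "state": job["state"]}

resp = start_transcode("promo.mxf")
while poll_transcode(resp["job_id"])["state"] == "running":
    pass   # a real client would sleep ~1 second between polls
```

Each poll is itself a complete, stateless request: the job id carried by the client is all the server needs to locate and report the job.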
Caching is an efficient method of speeding up internet responses, as data can be held closer to the client rather than being continuously requested from the server. In RESTful terms, each response is marked as cacheable or non-cacheable so that any intermediate caching servers know whether they should cache the response.
Although this caching greatly improves network efficiency and user response times, it makes the fundamental assumption that the data from the server doesn’t change much. However, in real-time broadcast facilities this is often not the case, and most responses will be marked non-cacheable.
Layering facilitates architectural abstraction so that a top-down dependency can be maintained. An example of this is to improve security as higher layers can verify user requests as part of a zero-trust type strategy. Users sending a message to a microservice may initially connect to an API layer that verifies the security of the access, or the format of the data being sent. Only when the message has met the requirements of the API layer will it then be passed on to the lower microservices layer.
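A minimal sketch of that layering, assuming a token check and a required-field check as the gateway's two tests: the API layer validates the request and only then forwards it to the lower microservice layer. The token set, field names, and status codes are hypothetical placeholders.

```python
# Sketch of an API gateway layer in front of a transcoder microservice,
# in line with a zero-trust approach. All names here are illustrative.
VALID_TOKENS = {"secret-token"}
REQUIRED_FIELDS = {"source_file", "profile"}

def transcoder_service(request):
    """Lower layer: only ever sees requests the gateway has verified."""
    return {"status": 202, "accepted": request["source_file"]}

def api_gateway(token, request):
    """Upper layer: verify access and payload before forwarding."""
    if token not in VALID_TOKENS:
        return {"status": 401}              # reject unauthenticated calls
    if not REQUIRED_FIELDS.issubset(request):
        return {"status": 400}              # reject malformed payloads
    return transcoder_service(request)      # only now reach the lower layer

ok = api_gateway("secret-token", {"source_file": "promo.mxf", "profile": "hd"})
bad = api_gateway("wrong-token", {"source_file": "promo.mxf", "profile": "hd"})
```

Because each layer only talks to the one directly below it, the security checks can be strengthened or replaced without touching the microservice itself.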
Providing a layered system requires the architecture to be built into the design from the very beginning.
Designing microservice ecosystems requires careful API design as they must be backwards compatible and provide independence between the client and server applications. Software development teams are no longer restricted to one building, city, country or even time zone, and careful RESTful API design is key to the success of the team’s ability to deliver features quickly and reliably.