Broadcast Standards: Microservices Functionality, Routing, APIs & Analytics

Here we delve into the inner workings of microservices and how to deploy and manage them. We look at their pros and cons, the role of DevOps, Event Bus architecture, the role of APIs and the elevated need for logging.
Microservices are an alternative to building large monolithic server-based applications. They are particularly well suited to large-scale enterprise problems but less useful to smaller organizations or for simpler tasks.
Deploy them in containers orchestrated with Kubernetes running on cloud computing platforms. This gives excellent performance. There are many providers of scalable cloud and container orchestration solutions. Construct your own private on or off premises cloud-based infrastructure or have it built for you if you prefer.
Since a microservice only has to implement a highly constrained and simple task, the development effort is significantly reduced and the code is less complex than a monolithic solution. They are also very lightweight processes. The container does not need to support everything that a monolithic server needs so the resource utilization is much more efficient. Many containers can coexist in a single server.
This is how the components fit together:
Each layer in this diagram represents multiple instances of that item. A monolithic application equates to a virtual CPU. A microservice is equivalent to a process.
Each microservice is developed independently, often by different teams working in parallel. Individual microservices can be retooled and redeployed with minimal service disruption provided their interfacing protocols are not altered.
Microservices evolved from the very early UNIX service listener implementations. These would fire off a process when a connection request arrived at a specific port in the IP socket interface. Refer back to an earlier series of articles on building monitoring tools for more details.
Service Oriented Architectures (SOA) advanced the art. They used the XML-based SOAP protocol for request-response formatting.
When SOA architectures struggled to provide sufficient capacity for large enterprises, microservices emerged as a viable solution. Containerized processes running in cloud-compute based virtual CPUs are an ideal platform for hosting them.
Each microservice has its own private memory, optional database and process space. Microservices can use libraries of shared code managed by the DevOps Continuous Integration services.
Why Is DevOps So Important?
DevOps is a broad term to describe the collaboration between developers and systems administrators. We call this Continuous Integration and Continuous Delivery or Deployment (CI/CD).
It provides tools and techniques to facilitate the rapid and automated deployment of software changes. It is useful when developing monolithic systems with large teams and mandatory when deploying microservices. The complexity involved in microservices demands the support of the DevOps discipline if developers are not to be overwhelmed by the scale of the task.
Code and configuration changes are maintained in a repository with developers submitting only their completed and fully tested code to replace the previous version. It is vital that the developers only submit bug-free, feature-complete working code. These changes will be deployed enterprise-wide automatically. A single misplaced character in the source code or config files can bring the entire enterprise down.
Here is an outline of the automated DevOps deployment process:
- Overnight the DevOps process is called to action.
- The automation runs a test suite on the code changes which allows it to accept or reject them. This requires additional developer effort to design and implement the tests (a minimal sketch of such a gate is shown after this list).
- Everything is rebuilt by the DevOps process using the latest validated version of the code in the repository.
- Microservices can benefit from shared code which facilitates consistent behavior. This code is included as the executable apps are compiled and linked.
- The compiled executable code and config changes are deployed to the target containers and servers.
- After deployment, the orchestrator uses the latest version of the containers when it next runs up the services.
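As a hedged sketch of that accept/reject gate (assuming a Python project tested with pytest; the script is illustrative, not part of any particular CI product):

```python
import subprocess
import sys

def gate_changes() -> bool:
    """Run the automated test suite; accept the changes only if it passes."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    if result.returncode != 0:
        # Reject: surface the failures and leave the previous build in place.
        print(result.stdout)
        print("Changes rejected - deployment aborted.", file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if gate_changes() else 1)
```

A real pipeline would wire this into the repository's CI tooling so the rebuild and deployment steps only proceed when the gate returns success.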
Rolling out microservice solutions without first getting your DevOps house in order is guaranteed to fail!
Container Orchestration
Managing many microservices becomes increasingly complex as you scale up. A service mesh running in Kubernetes will orchestrate the individual microservices, providing these advantages:
- Reliability.
- Observability.
- Enhanced security.
- Decoupled communication.
- Data/control abstraction.
Alternatively, delegate all of the responsibility to a third party and use (for example) Google Cloud Run to deploy your containers. There is no need to build the orchestration or the servers. Just deliver your containers to Google to run for you. Google Cloud SQL provides relational database support in a similar way. Because they both exist within the Google ecosystem, integration with Google Gemini AI tools is also facilitated if you need it.
Since this is a highly competitive industry, the other cloud platform providers all offer similar tools and features. Microservices can be widely distributed. Deploy them on multiple platforms if necessary. This offers redundancy in case of platform outages but increases the complexity. Keeping everything on one provider platform is certainly simpler.
A lot of the traditional maintenance jobs needed to keep systems running don’t go away with microservices. Garbage collection is still required but done quite differently.
Incomplete processes may leave orphaned microservices still running. The orchestration manager should clean these up periodically, helped by configurable timeouts and confirmation that any owning session has been torn down and purged.
What Can Microservices Do?
There is no concrete definition of when an application becomes small enough to be deemed a microservice. They are all just services. Some deliver a single measurement value; others could kick off an entire visual effect rendering process.
Arguably the ‘micro’ prefix is misleading because they are not always small. For example, these might be high-level tier 1 services:
- Handle user registration.
- Handle user login with persistent tokens being granted for subsequent access control.
- Search the repository for a list of matching assets.
- Upload an asset.
- Download an asset.
- Update the metadata for an asset.
- Convert a file.
- Extract audio from a video file.
- Embed audio into a file.
- Insert chapter marks.
- Use speech recognition to extract a transcript.
Internally tier 1 services might call on these tier 2 microservices:
- Maintain a central user account store to underpin user account management.
- Set the status of a user account.
- List the contents of a directory.
- Asset management microservices to support the asset transactions.
- Vend a file.
- Replace a file.
- Delete a file.
- Rename a file.
- Trace a network route and log the results.
- Check that a remote network node is up and running.
At a more granular level, internal tier 3 services might be responsible for handling very small tasks:
- Fetch a measurement value.
- Fetch a database record.
- Fetch an array of database records.
- Start a background process.
- Halt a background process.
- Update a console tally indicator.
- Record a logging event for analytics.
Be careful not to over-engineer your microservice deployment. As you break a monolithic application into modular sections, the tier 1 services may be obvious. Tier 2 and 3 microservices might be better factored into library calls rather than separate microservices. These can be deployed as shared code libraries via your DevOps continuous integration processes.
Think about how data becomes partitioned and spread across multiple small databases rather than one single database. Work out how some of these problems can be solved with aggregating microservices to provide shared services to others.
Communication Between Microservices
Because each microservice only performs a single and very simple job, they need to collaborate to accomplish more complex tasks. Each microservice runs in its own private environment with a separate database. They cannot know anything about the internals of any other microservice but they can communicate with any of them to request information.
A monolithic solution could share the same database across the entire application. A microservice design needs to pass all of the contextual information needed by the request in every call to action.
Microservices communicate with one another by sending requests and acting on the response. They share some common behavioral attributes with Object Oriented Programming. The internal functionality is hidden from view. Resources and storage are entirely private unless they are exposed via a request-response transaction.
Several communications mechanisms are used between microservices:
- Message queues and brokers.
- Event streams (such as Amazon EventBridge).
- REpresentational State Transfer (REST) APIs.
Security Risks
Provided verification of the calling process adequately satisfies the required level of security, microservices should be resilient against some cyber-attacks that would penetrate a monolithic server. They require special attention to validate their calling client, perhaps with secure tokens. Refer to discussions on OAuth or JWT (JSON Web Tokens) for more information. It is also certain that there are other novel intrusion risks for microservices that would not affect monoliths. Security is at best a moving target and at worst an arms race!
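As an illustrative sketch of token validation (assuming the PyJWT library and a shared secret; the claim names and secret handling are hypothetical), a microservice might verify its caller like this:

```python
import jwt  # PyJWT

# In practice the secret would be injected by the orchestrator, never hard-coded.
SECRET = "replace-with-a-managed-secret"

def verify_caller(token: str) -> dict:
    """Reject the request unless the bearer token verifies and has not expired."""
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError as exc:
        raise PermissionError(f"Rejected caller: {exc}")
    # e.g. {"sub": "user-42", "scope": "assets:read", "exp": 1735689600}
    return claims
```

Production deployments more commonly use asymmetric keys (RS256) so that services can verify tokens without sharing the signing secret.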
Using Message Queues & Brokers
Here is an example architectural design for message queue microservice communications:
In the illustration, the client apps connect to the API gateway via the web front end. The API Gateway load-balances and distributes calls to an appropriate microservice. Microservices might not need to support API access at all when they only exist internally as lower tier services.
Microservices communicate with one another via a message queue managed by the message broker. A message broker is sometimes described as a publish-subscribe mechanism and operates as a one-to-many distributor.
Messages are pushed onto the message queue and published to all of the subscribed listeners. The responsibility for determining whether the message is appropriate then falls on the receiving microservice to ignore or act on the message as it sees fit.
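The behavior is easy to sketch in miniature. This toy in-process broker (purely illustrative; a real deployment would use RabbitMQ, Kafka or a managed equivalent) publishes every message to all subscribers on a topic, and each subscriber decides what to do with it:

```python
from collections import defaultdict

class ToyBroker:
    """One-to-many publish-subscribe: every subscriber on a topic sees every message."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Push the message to all subscribed listeners; each receiver
        # ignores or acts on the message as it sees fit.
        for handler in self.subscribers[topic]:
            handler(message)

broker = ToyBroker()
broker.subscribe("asset.events", lambda m: print("indexer saw", m))
broker.subscribe("asset.events", lambda m: print("logger saw", m))
broker.publish("asset.events", {"type": "upload", "id": 1234})
```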
Using An Event Bus & Pipes
An Event Bus is a more elegant solution than a message broker for connecting many microservices together. It acts as a router for incoming events, which are forwarded to target microservices.
A good example of this is the AWS EventBridge solution for managing communications between microservices. Microservices are registered with their configuration on the Event Bus as they are initialized. Incoming events are filtered according to configurable criteria and then forwarded to zero or more targets by rules which determine how they are propagated.

Pipes are also quite useful as entry points for event buses. This example is very simple and illustrates a one-to-many scenario similar to a message broker but with more fine-grained filtering and propagation rules.
An Event Bus is a many-to-many approach, whereas a message broker is a one-to-many solution. Events can be constrained to a one-to-one use-case by using Event Pipes.
Quite complex event routing scenarios can be designed with an Event Bus.
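As a hedged sketch of that pattern using AWS (assuming boto3; the bus, rule, source and target names are all hypothetical), a rule filters incoming events and forwards matches to a target:

```python
import json
import boto3

events = boto3.client("events")

# Forward only transcode-completed events to a (hypothetical) queue target.
events.put_rule(
    Name="transcode-complete",
    EventBusName="media-bus",
    EventPattern=json.dumps({
        "source": ["transcoder-service"],
        "detail-type": ["TranscodeComplete"],
    }),
)
events.put_targets(
    Rule="transcode-complete",
    EventBusName="media-bus",
    Targets=[{"Id": "notify-queue", "Arn": "arn:aws:sqs:..."}],  # placeholder ARN
)

# A producing microservice publishes an event onto the bus; the rules
# decide which targets, if any, receive it.
events.put_events(Entries=[{
    "EventBusName": "media-bus",
    "Source": "transcoder-service",
    "DetailType": "TranscodeComplete",
    "Detail": json.dumps({"asset_id": 1234, "status": "ok"}),
}])
```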
RESTful API Support
REpresentational State Transfer (REST) is a set of constraints applied to an API that mandates that all transactions are independent of each other and that no stateful information is retained from one transaction to the next.
Statefulness is maintained by the calling client (parent) process. Any necessary state information is passed in with every transaction request. The target microservice does not maintain any state information from its caller between transactions. As a client in its own right, it can maintain state information on behalf of other microservices that it calls as downstream ‘children’. As the microservice finishes and returns its response to the caller, any state information relating specifically to that call must be purged. This satisfies the requirements for a RESTful API.
Managing statefulness like this allows massive scaling across multiple servers which is much harder to do when the session state is managed on the server side.
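A minimal sketch of the constraint (assuming Flask; the endpoint and fields are illustrative): everything the handler needs arrives with the request, and nothing about the caller survives the response:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/assets/<int:asset_id>")
def get_asset(asset_id):
    # All per-call state arrives with the request itself...
    token = request.headers.get("Authorization", "")
    fields = request.args.getlist("field")
    # ...and is discarded once the response is returned. Nothing about
    # this caller is retained for the next transaction, so any instance
    # of this service on any server can handle the next request.
    return jsonify({"id": asset_id, "fields": fields,
                    "caller_verified": bool(token)})
```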
HTTP Request Formatting
Microservice API connections use the HTTP protocol which is well understood and simple to implement. The HTTP protocol is easily handled by firewalls so microservices can communicate with other networks when necessary. Manage and secure this connection to external networks very carefully to block unwanted intrusion attempts.
The classic HTTP interface between a web browser and a server is described as a request-response loop. The time taken between the initiation of a request and receiving the entire response payload is critical to maintaining performance. Turnaround times of a few milliseconds at most are the goal.
The structure of requests and responses is similar and based on the original design for Internet mail services that was published in RFC 822. This describes the entire transaction as a header and body delivered as a single block of data, separated by a blank line:
The header contains individual lines that are formatted as name-value pairs:
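For example, a minimal request might look like this on the wire (the endpoint, headers and body are illustrative):

```
POST /assets/search HTTP/1.1
Host: media.example.com
Content-Type: application/json
Content-Length: 27

{"query": "interview 2024"}
```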
Each header record is terminated by a line break to separate it from the next one; HTTP specifies a carriage-return/line-feed pair (\r\n), although many parsers tolerate a bare newline (\n). An empty line (\r\n\r\n) separates the header block from the body. Headers must not wrap around with embedded line breaks of any kind. This is permitted in mail messages but should not happen in HTTP (header folding was deprecated in RFC 7230).
The header name should be composed without any control characters or spaces and preferably only use the ASCII (7-bit) printable character set. To ensure reliable processing downstream, use only upper and lower case alphanumeric (A-Z, a-z, 0-9) and dash characters.
The colon (:) character separates the header name from the value. An optional space after the colon is permitted. Leading and trailing whitespace is removed from the value before it is parsed.
Strictly speaking, an HTTP parser splits a header on the first colon only, so embedded colons (:) within the value payload should survive intact. Some downstream tooling is less careful, however, so if you need to be defensive, escape embedded colons consistently using one of the forms below and perform a string replace to restore the colon character once the value payload has been extracted from the header record and before it is parsed:
- %3A (URL encoding)
- &#58; (HTML numeric entity)
- &colon; (HTML named entity)
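A minimal sketch of the escape-and-restore round trip using the %3A form (the header name and value are hypothetical):

```python
def escape_value(value: str) -> str:
    # Applied before the value is placed into the header record.
    return value.replace(":", "%3A")

def restore_value(raw: str) -> str:
    # Applied after the value payload has been extracted, before parsing.
    return raw.replace("%3A", ":")

header = f"X-Asset-Locator: {escape_value('media:library:1234')}"
name, _, raw = header.partition(": ")
assert restore_value(raw) == "media:library:1234"
```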
The HTTP request/response content uses one of these payload formats for the body block to transport information between microservices:
- XML - A data exchange format.
- HTML - Web page markup, possibly containing micro-formats.
- JSON - JavaScript Object Notation for serialized object representation.
- TEXT - Plaintext is an implicit subset of HTML with no embedded markup, although it is rarely listed as an option.
Implementing Re-entrant Handlers
The receiving code handling each transaction must be re-entrant. Re-entrant code is well suited to the asynchronous nature of Event handlers delivered through an Event Bus. Events are triggered at unpredictable times and in any order. They also need to run concurrently with other instances of the same handler.
Hard-wired transaction data and mutable global or static local variables are prohibited. Any transaction-specific information must be derived solely from the state information passed via the API. Re-entrant code can then be called multiple times without any interaction between the various instances.
Global variables are useful for holding constant values that are set up at initialization and shared by all instances of the handlers.
Static local variables would be useful for holding API keys and tokens for accessing downstream services but only where they can be legally shared by many client sessions.
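A short sketch of the distinction (the names are illustrative): constants set at initialization may be shared by every instance, but all transaction-specific values must arrive as arguments:

```python
import os

# Safe: a constant set once at initialization and shared read-only by
# every concurrent instance of the handler.
DOWNSTREAM_URL = os.environ.get("METADATA_SERVICE_URL", "http://metadata.internal")

def handle_request(asset_id: int, auth_token: str) -> dict:
    # Re-entrant: all per-transaction state arrives as arguments and
    # lives on this call's stack. No globals are written, so concurrent
    # invocations cannot interfere with one another.
    return {"asset_id": asset_id,
            "lookup": f"{DOWNSTREAM_URL}/records/{asset_id}"}
```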
Maintaining Internal Service State
Microservices are permitted to maintain persistent cached service-level stateful information between calls without breaking the RESTful API rules.
For example, if a microservice lives on after a call is completed, a database connection does not need to be torn down between calls. The connection itself is not session dependent. It can be shared without leaking session related information to other callers. Performance improves significantly if the connection is maintained.
Likewise, API keys for access to downstream child microservices that are not transaction specific can be cached for reuse if that is appropriate. This is allowed on the basis that this microservice is a client of the downstream ‘child’ microservices it invokes so this is an appropriate place to maintain state for them. They must not be passed upwards to any callers as that would break the RESTful nature of the API.
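A minimal sketch of the pattern (using sqlite3 purely as a stand-in for whatever database the service actually talks to): the connection is service-level state and is reused, while every per-call value still arrives as an argument:

```python
import sqlite3
from functools import lru_cache

@lru_cache(maxsize=1)
def get_connection():
    # Created once on first use, then reused by every subsequent call.
    # The connection carries no caller session state, so sharing it
    # does not break the RESTful contract.
    return sqlite3.connect("assets.db", check_same_thread=False)

def fetch_asset_title(asset_id: int):
    # Per-call state (asset_id) arrives as an argument; only the
    # connection is reused between calls.
    row = get_connection().execute(
        "SELECT title FROM assets WHERE id = ?", (asset_id,)
    ).fetchone()
    return row[0] if row else None
```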
Potential Latency Issues
Spinning up an entire virtual server can take some time, perhaps between 30 and 120 seconds or more. Because containers are processes within an already running virtual CPU, they start up more quickly. Processes start in just a few milliseconds. Threads start even more quickly than processes but might limit the scalability.
In a monolithic application, inter-process communication and shared access to data in memory is simple and efficient. Microservices require network connections even within locally hosted containers and this introduces some latency. The table below broadly ranks various communications methods by latency although there is some overlap in timings and the exact order is debatable.
Finding the right balance between the scope of a larger microservice as opposed to a total deconstruction into thousands of small parts is where the architectural skill of the designer is important. Multiple component services chained together increases latency.
Rank for speed | Communication method |
---|---|
1 | In process comms within a single monolithic application is fastest. |
2 | Inter-process comms requires context switches and forwarding of messages within a single host CPU. |
3 | Direct socket connections provide the fastest connections between processes in different host CPUs. |
4 | HTTP request-response loops are optimized for rapid turn-around times but still involve a network that can add latency. |
5 | Remote procedure calls are an extension of inter-process comms requiring a network pipe and security arbitration between two host CPUs. |
6 | Event bridges direct traffic only where it is needed and operate more asynchronously than a message queue. |
7 | Messaging services require a message queue and broker to work. Queues by definition must introduce some latency in high traffic situations. |
Think carefully about the design of your microservices to minimize these performance impacts and factor the statefulness accordingly.
Logging, Analytics & Debugging
Logging is much more important in the context of microservices than it ever was with monolithic applications. Because the microservices are running in the background as headless processes, they cannot be directly observed.
Adding logging stubs to record activity is very useful. The logs can be watched with the tail -f command in a terminal console window to follow the background activity. Pipe the output through grep filters (for example, tail -f service.log | grep ERROR) to thin the stream down to just the information you need.
Gathering statistics and analyzing them will indicate bottlenecks in your design. Measure the way each microservice consumes resources and count how many times and when it is invoked. This will give you valuable insights into streamlining the design.
Centralize the logging with an additional microservice. This can gather and aggregate the performance measurements from many separate and dispersed microservices. Logging output could be redirected to a centralized syslog for subsequent analysis.
Think carefully about your log format and develop some standardized structures to facilitate the analysis. Look at how Apache maintains the access, errors and referrer logs as an example. Design your log format around the processing algorithm to simplify its design.
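One hedged approach (the field names are illustrative) is to emit one JSON object per line so that the analytics stage can parse records without a bespoke parser:

```python
import json
import logging
import time

logger = logging.getLogger("asset-service")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)

def log_event(event: str, **fields):
    # One self-describing JSON object per line ("JSON Lines") keeps the
    # downstream aggregation and analysis code simple.
    record = {"ts": time.time(), "service": "asset-service",
              "event": event, **fields}
    logger.info(json.dumps(record))

log_event("asset.download", asset_id=1234, duration_ms=42)
```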
Monoliths Vs. Microservices
Monolithic server-based solutions have been around for a long time. There is always a temptation to throw out all the old tech because surely the newer solution is going to solve all the shortcomings of doing it the old way! Be wary of this mindset. Microservices are very good for some requirements, while monolithic servers are still good solutions for others. They each have a part to play.
The table below lays out the pros and cons for each methodology.
Feature | Monoliths | Microservices |
---|---|---|
Scalability | Requires multiple processes to be managed within a single memory model. | Each service exists in its own compact virtual environment which is very small, independent and private. |
Organization size | Better for small organizations with limited developer resources. | Good for large enterprises with many developers working in parallel. |
Frequency of updates | Suitable when updates are few and infrequent. | Ideal when updates are many and frequent. |
DevOps | Can often manage without. | Mandatory. DevOps is vital to deploying continuous integration and deployment changes. |
Lock-in | The entire server may be locked-in to a single technology which is hard to decouple. | Lock-in (such as it ever happens) is on a container-by-container basis. Can use many different solutions. Can be decoupled and replaced easily. |
Sys admin | Straightforward and predictable. | Requires careful automated orchestration of possibly many thousands of individual containers. |
Messaging latency | Inter-process communication is entirely local. Latency is low. | Service to service communications always uses the network since there is no single server to share processes or memory. Network saturation can increase latency. |
Security | Fire-walling is applied around the boundary of the entire server. A single penetration can attack the whole server. | Protections can be implemented at the individual service level and messages can be authenticated to avoid attacks being propagated. |
Fault tolerance | A serious fault can bring down the entire server. | A fault in a container might only take that service offline, although all instances of that service might be affected. The application as a whole may cope with the outage more effectively. |
Testing | Developing small applications may be simpler and testing end-to-end is much less complex. IDE debugging is practical. | Applications are composed of many moving parts. End-to-end testing is complex. Testing individual items is less complex. IDE debugging without a test harness is impractical. |
Deployment | A single large application needs to be rolled out at once. | Incremental rollout of small components with more frequent changes. |
Maintenance | Easier to maintain one simple application. | Requires DevOps and container orchestration to be carefully configured. |
Flexibility | Everything is tightly coupled and harder to scale. | Highly flexible as new components can be added and scaled with container orchestration. |
Local data | All data is held locally and accessible to the entire application. Only a single copy is maintained. | Processing is distributed so data must be delivered when needed. This also implies data is potentially duplicated in many places and fragmented. |
Databases | Everything is based on a single database that is available to all parts of the application. | Databases are partitioned on a per microservice basis and only contain what that service needs. The data is significantly fragmented as a result. This may lead to duplication. Synchronization problems are avoided by additional message passing to exchange data with other services. That has a performance hit as a consequence. |
Search | Searching a single database is easy and reliable. | Searching a fragmented and distributed database needs careful design. Potentially the whole database could be owned by a single microservice which vends results to its clients. |
State management | Implicitly supported by a monolithic design. | Managed by passing additional information in each and every RESTful call. State is managed by the calling client whether that is a user interface or another microservice. |
A Potential Approach To Migration
A monolithic solution may work perfectly well. Breaking it down and retooling it as a collection of microservices may not be worth the trouble.
A hybrid approach is also perfectly viable. Anecdotal evidence from case studies has suggested that a microservice approach can become so complex to manage that there is little developer resource left available to create new containers.
Combine the best aspects of monolithic and microservice architectures. Applications can still be dismantled into separate modular units that can be developed and deployed individually within a monolithic server. User facing processes can be facilitated with microservice concepts. Service granularity can be managed between the two contexts using Service Oriented Architectures (SOA).
Some determining factors are application complexity and the available development resources; these may suggest a monolithic approach is better. Future scalability plans and how often updates need to be deployed will steer you towards microservices. A careful migration towards a modular architecture might suggest a hybrid solution is the optimum.
Migrating from a monolithic approach to a microservice containerized architecture does not need to be done in one big step. Carefully deploying services one at a time to replace component parts of an application is a good approach. The monolithic application becomes hybrid at first and gradually evolves into an entirely microservice-based solution. There may even be a point at which the hybrid nature becomes optimal and no further migration is necessary.
It is not always clear which is best from the outset, and some organizations have reverted to a monolithic design after failing to get a microservice system working as expected.
Don’t Believe Everything You Hear
Beware of some of the hyperbole being bandied about. One commentator suggested that microservices offered disruption-free deployment. I think that is an unrealistic expectation.
It is not difficult to imagine a container being deployed via DevOps with a completely non-functioning service that is fundamental to the entire system working properly. That is not going to be at all disruption-free and would be equally problematic with monolithic or microservice-based architectures!
Rolling the container changes back again should be much easier and quicker with a well-orchestrated microservice system provided you carefully maintain the historical versions of your containers. This should also be true for monolithic designs by rolling back a code change in the repository and recompiling. It would just take a little bit longer perhaps.
The possibility of disruption-free deployment depends on the robustness of your change management and back out process. It is not an attribute of the underlying architectural design.
The responsibilities for careful planning, change control and most importantly properly updated documentation do not go away with microservices. In fact, they are more important than they ever were.
Conclusion
There is no doubt that microservices are very popular and extremely useful. Surveys suggest between 75% and 85% of large enterprises have adopted this approach.
A lot of the Fear, Uncertainty and Doubt (FUD) is due to the significant work involved in migrating to this new architectural model. Several large organizations have described their process in case studies and one common factor is that it will take time. Think in terms of potentially several years of careful and diligent work to complete the transition of a big monolith to microservices. Most importantly, don’t rush the process. It is a marathon and not a 100 yard dash.