Scalable Dynamic Software For Broadcasters: Part 11 - Container Security

As broadcasters continue their transition to IP, the challenges of security are never far away. But security isn’t the responsibility of one person or department; it is the responsibility of everybody in the organization.

Taking a Zero Trust approach goes a long way towards keeping systems secure. Instead of the trusted network approach, where anybody within the network is trusted, Zero Trust assumes every transaction is a potential security breach that must be verified before the process can continue.

The traditional trusted network approach was fundamentally flawed as anybody who could gain access to the network was then trusted until they logged out again. This could have potentially massive implications for the broadcaster as a hostile actor could lie in wait within a network for days, weeks, or even months without being detected. They could silently gain access to a whole host of user credentials and data, often resulting in catastrophic consequences. Keeping them out was difficult and detecting them once they were in the network was almost impossible.

Microservice and container architectures use the internet model for data exchange, control, and issuing of commands. This not only provides incredible flexibility, as anybody with an internet browser can operate the system, but it also allows broadcasters to take advantage of the innovation in other industries and operate in datacenter and cloud environments, thus delivering exceptional resilience and scalability. However, one of the downsides of using this technology is that broadcasters must take the same security measures as high-end enterprise datacenters.

Using the Zero Trust model requires validation for every user at every data or control point within a system. Once a user is logged in, the validation doesn’t stop; it continues. For example, if a user creates a transcode job, their login credentials are first verified by an Active Directory-type service, which in turn requests an OAUTH2 token from the OAUTH2 server. When the user executes the process, the appropriate microservice receives the token and checks it against the OAUTH2 validation server; only if it is authorized does the microservice continue.
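
To make this concrete, here is a minimal sketch of how a microservice might verify a bearer token against the OAUTH2 validation server, following the token introspection pattern of RFC 7662. The endpoint URL, client credentials, and service names are illustrative placeholders, not a reference to any particular product:

```python
import requests

# Hypothetical introspection endpoint and client credentials - replace
# with the values issued by your own OAUTH2 validation server.
INTROSPECT_URL = "https://auth.example.broadcaster/oauth2/introspect"
CLIENT_ID = "transcode-service"
CLIENT_SECRET = "change-me"

def token_is_valid(token: str) -> bool:
    """Ask the OAUTH2 validation server whether a bearer token is active.

    Follows the RFC 7662 token introspection pattern: the microservice
    POSTs the token and receives back an 'active' flag plus metadata.
    """
    resp = requests.post(
        INTROSPECT_URL,
        data={"token": token},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("active", False)

# Inside the transcode microservice: refuse to start the job unless the
# caller's token is authorized - every transaction is verified (Zero Trust).
def start_transcode(job: dict, token: str) -> None:
    if not token_is_valid(token):
        raise PermissionError("OAUTH2 token rejected: access denied")
    # ... proceed with the transcode job ...
```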

OAUTH2 tokens are critical to the operation of microservice architectures. They are not only validated against the user’s login credentials, but also carry a time limit and user access rights. If a system administrator suspects a hostile actor has broken into the broadcast infrastructure, they can disable the token, and any further access attempts will result in the OAUTH2 validation server denying access to the process or storage.

Each memory access or process execution must have an authorized OAUTH2 token associated with it. Validating this token will significantly reduce the risk of a hostile actor breaking into the system. 
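
Where the OAUTH2 server issues access tokens as JWTs, a microservice can also check the time limit and access rights carried in the token itself. The sketch below assumes RS256-signed tokens with a conventional ‘scope’ claim; the public key is a placeholder for the validation server’s published signing key:

```python
import jwt  # PyJWT

# Placeholder for the OAUTH2 validation server's published signing key.
PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def check_token(token: str, required_scope: str) -> dict:
    # jwt.decode verifies the signature and, by default, rejects tokens
    # whose 'exp' claim has passed - enforcing the token's time limit.
    claims = jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"])
    # User access rights are assumed to be carried in a 'scope' claim;
    # deny anything the token was not explicitly granted.
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError(f"token lacks the '{required_scope}' scope")
    return claims

# e.g. claims = check_token(bearer_token, "transcode:write")
```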

Figure 1 – The OAUTH2 validation server issues a token that is then used by every process in the microservice architecture to validate access to the data.

Vulnerabilities

Although we talk much of IT vulnerabilities and security issues, it’s worth remembering that broadcasters also have their security weak points, even with SDI and AES. It’s just that they were much better contained due to the localized nature of the television station. But that resulted in a static infrastructure that was difficult to scale and lacked flexibility.

Areas of interest for microservice architecture security can be grouped into the following:

  1. Container image software
  2. Interactions between the container, host operating system, and other hosts
  3. Host operating system
  4. Networking and repositories
  5. Runtime environments

Item 2 is generally taken care of by OAUTH2 authentication, and items 3, 4, and 5 should be addressed as a matter of good practice by any system administrator. However, item 1 needs much more consideration as it has implications for procurement and software provenance.

A container encapsulates one or more microservices together with the appropriate libraries and operating system dependencies, and these dependencies can themselves be a source of security vulnerabilities.
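
One practical mitigation is to scan every container image for known CVEs before it is deployed. The sketch below assumes the open-source Trivy scanner is installed on the build host; the image name is hypothetical:

```python
import json
import subprocess

def scan_image(image: str) -> list[dict]:
    """Scan a container image for known CVEs using the open-source
    Trivy scanner (assumed to be installed on the build host)."""
    result = subprocess.run(
        ["trivy", "image", "--format", "json", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    findings = []
    for target in report.get("Results", []):
        findings.extend(target.get("Vulnerabilities") or [])
    return findings

# Fail the build if the image carries critical vulnerabilities.
critical = [v for v in scan_image("broadcaster/transcode:1.4")
            if v.get("Severity") == "CRITICAL"]
if critical:
    raise SystemExit(f"{len(critical)} critical CVEs found - do not deploy")
```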

If the microservice has been written in-house, it is the responsibility of the software or DevOps team to thoroughly test the software for vulnerabilities. This includes any code dependencies, such as libraries provided by an outside source. Furthermore, as a matter of good practice, developers must regularly check vulnerability catalogs such as those published by the Cybersecurity and Infrastructure Security Agency (CISA – cisa.gov). CISA is a US government agency that maintains a catalog of known exploited vulnerabilities and aims to understand, manage, and reduce the risk to cyber and physical infrastructure.
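
CISA also publishes its Known Exploited Vulnerabilities (KEV) catalog as a machine-readable JSON feed, so a build pipeline can automatically cross-check scan results against it. A minimal sketch, with example CVE IDs standing in for a real scan report (the feed URL is correct at the time of writing):

```python
import requests

# CISA's Known Exploited Vulnerabilities (KEV) catalog, published as JSON.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def known_exploited_cves() -> set[str]:
    """Fetch the set of CVE IDs currently in CISA's KEV catalog."""
    feed = requests.get(KEV_URL, timeout=30).json()
    return {entry["cveID"] for entry in feed["vulnerabilities"]}

# Cross-check CVEs found in a container image (for example, from the scan
# sketched above) against the catalog: anything known to be actively
# exploited in the wild should be patched first.
image_cves = {"CVE-2023-4863", "CVE-2021-44228"}  # example IDs from a scan
urgent = image_cves & known_exploited_cves()
print(f"{len(urgent)} of the image's CVEs are known to be actively exploited")
```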

Guidance for vulnerability management is provided by the UK’s National Cyber Security Centre at ncsc.gov.uk. The NCSC advises that a vulnerability management plan must be in place so that DevOps and IT teams know which vulnerabilities are present within their infrastructure, and that this knowledge is kept up to date.

Not only do the vulnerabilities need to be understood but meticulous processes must be in place to make sure the necessary patches are applied, deployed, and documented accordingly.

Although all IT and DevOps departments must be aware of these agencies and have the appropriate plans in place, if the microservice components or architecture are provided by a third-party supplier, then the extent of the supplier’s responsibility must be understood. Much of this comes down to understanding the provenance of the code and the processes the vendor has undertaken to make their software as secure as possible.

Recovery

Unfortunately, systems do occasionally go wrong, and no matter how hard a broadcaster may try, it is an unfortunate fact of life that their high-profile status attracts some of the best cybercriminals in the world. Consequently, a recovery plan must also be put in place.

Using microservice architectures makes this much easier than many of the monolithic software systems of the past would allow. The very nature of microservices means there is a constant deployment cycle, which gives ample opportunity for applications to be checked and verified against the cybersecurity databases. This creates a security-first culture, which must be the mantra for any broadcaster operating an IP infrastructure.

Storage can easily be backed up in the cloud, where broadcasters are limited only by the amount of money they want to spend. Whether to provide incremental or full backups is a matter not only of budget but also of security. If a file is synced from an on-prem storage system to the cloud and the on-prem copy becomes compromised, then copying the file to the cloud will compromise the cloud copy too. Incremental backups often alleviate this, as there will be a historical version of the file that has not been compromised.
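
The sketch below illustrates the principle: only changed files are copied, and timestamped historical versions are kept rather than overwritten. A production system would more likely use cloud object versioning or a dedicated backup service, but the idea is the same:

```python
import hashlib
import shutil
import time
from pathlib import Path

def incremental_backup(source: Path, backup_root: Path) -> None:
    """Copy only changed files, keeping timestamped historical versions.

    Because older versions are never overwritten, a file that is
    compromised on-prem does not destroy the clean historical copies.
    """
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dest_dir = backup_root / rel.parent
        dest_dir.mkdir(parents=True, exist_ok=True)
        digest = hashlib.sha256(src.read_bytes()).hexdigest()[:12]
        versions = sorted(dest_dir.glob(f"{src.name}.*"))
        # Skip if the newest backed-up version already has this content.
        if versions and versions[-1].name.endswith(digest):
            continue
        stamp = time.strftime("%Y%m%dT%H%M%S")
        shutil.copy2(src, dest_dir / f"{src.name}.{stamp}.{digest}")

# Hypothetical on-prem media store synced to a mounted cloud target.
incremental_backup(Path("/media/onprem"), Path("/mnt/cloud-backup"))
```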

SELinux

The server’s operating system security can be improved using SELinux (Security-Enhanced Linux), which gives administrators much finer control over what users and processes can access. It was originally developed by the NSA (United States National Security Agency) as a set of Linux kernel patches, released as open source in 2000, and integrated into the mainline kernel in 2003 through the LSM (Linux Security Modules) framework.

SELinux operates by defining access controls for applications, files, and processes within the server. A set of rules, known as security policies, authorizes what can and cannot be accessed. When an application accesses a file or an ethernet port, for example, SELinux first checks the AVC (Access Vector Cache) for a cached decision. If none exists, it consults the in-kernel security server, which evaluates the loaded policy to determine whether access is permissible, and the decision is then cached in the AVC.
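
The decision flow can be pictured with a toy model. This is not real SELinux code, merely the AVC-then-security-server pattern in miniature, using a couple of familiar SELinux type names:

```python
# A toy model of the SELinux decision flow described above - purely
# illustrative, not an implementation of the real kernel mechanism.
avc_cache: dict[tuple[str, str, str], bool] = {}

policy_rules = {
    # (source context, target context, permission) -> allowed?
    ("httpd_t", "httpd_sys_content_t", "read"): True,
    ("httpd_t", "shadow_t", "read"): False,
}

def check_access(source: str, target: str, perm: str) -> bool:
    key = (source, target, perm)
    if key in avc_cache:               # fast path: cached AVC decision
        return avc_cache[key]
    # Cache miss: the security server evaluates the loaded policy;
    # SELinux denies by default when no rule allows the access.
    decision = policy_rules.get(key, False)
    avc_cache[key] = decision          # cache for subsequent checks
    return decision

assert check_access("httpd_t", "httpd_sys_content_t", "read")
assert not check_access("httpd_t", "shadow_t", "read")
```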

The effectiveness of this security relies on system administrators setting the relevant rules and testing them on a regular basis.

Configuring and operating SELinux is often a complex task, but it adds significant protection to a system as part of a Zero Trust strategy. The days when every user could have superuser access because it was more convenient for them are long gone.
