Building Software Defined Infrastructure: Zero Tolerance Security

Software-based systems bring immense flexibility, but they also bring increased vulnerability and inevitable trade-offs between flexibility and security.

Security is an issue broadcasters continue to take extremely seriously, especially given the amount of money involved in high-value media rights, and broadcasters are particularly exposed when holding program masters for high-value content from the big movie production companies.

Modern fingerprinting methods allow media production companies to trace their content from the point at which it is delivered to the broadcaster to the point at which it is made available to the viewer. If the media has been illegally copied anywhere in the broadcast chain, there is a good chance the movie production companies will know where the compromise took place.

Not only do broadcasters have to protect their networks from hackers wishing to interrupt their transmissions, but they must also protect third-party high-value media so that the big movie production companies are not compromised.

Public cloud computing must be one of the greatest advances in modern history. We can spin up any size of compute resource, with as much memory and storage as we would ever need, at datacenters all over the world. Armies of network engineers and security experts constantly make these datacenters as secure as they can be, and the ability to create scripts that replicate entire systems further increases their relevance to broadcast systems. However, with these great advances come new responsibilities for the broadcaster.

Public cloud service providers will only guarantee security to a limit defined by their own systems. As computers are highly configurable and programmable, we wouldn’t expect the service provider to protect against every security eventuality. It’s possible that they could do this, but the result would be a heavily constrained and restricted compute infrastructure, to the point where it would be quite useless for broadcasters. Building software defined infrastructures is therefore not just a technical achievement; it also demands high levels of security to prevent breaches and the copying of high-value media content.

Don’t Trust Anybody

Traditional IT systems relied on a protected zone that resembled a high wall, designed to stop anybody on the outside from breaking into the network, and it assumed that anybody on the inside had gained access legitimately and was authorized to be there. This worked well until users started making greater demands for internet access and bring-your-own-device (BYOD) working, and the perimeter fence started to have more holes than Swiss cheese.

Security is difficult because, in an ideal world, we would make systems 100% secure. This can be achieved if we don’t allow anybody or anything access to the computer systems, especially if we physically hide them and switch off the power. Clearly this is an absurdity, as the computer infrastructure would be useless. And that’s the concession we need to make: providing access to a computer system compromises its security. This is a constant fight IT and network engineers have with users: making a system more secure reduces its ease of use, while making it easier to use often compromises security. For example, making users change their passwords every time they log in would make the system highly secure, but there would soon be a revolt amongst those using it.

Zero trust security is a relatively new method of securing IT systems. It assumes that nobody, and nothing, can be trusted. Every access to the computers, network and storage must be validated against a database of access rights, and these rights are regularly reviewed or revoked after a certain amount of time.

The perimeter wall method provides a great deal of access to anybody who has managed to breach it; even worse, they can lie in wait undetected in the recesses of the system. The zero-trust method, by contrast, constantly verifies a user’s access requests against the central database.

Having to constantly validate every database, memory, storage or server access may sound tiresome, but it does promote a safety-first philosophy for anybody designing, building, or using the broadcast infrastructure. The zero-trust method further improves the security of microservices within software defined infrastructures as it validates access to the media in the storage, as well as the actual API call.
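As a rough illustration of the principle (and not any particular vendor’s implementation), the sketch below shows a microservice request path in which both the API call and the subsequent storage access are checked against a central access-rights store, with grants that expire after a set time. The service name, resource identifiers and function names are all hypothetical.

```python
import time

# Hypothetical central access-rights store: (principal, resource, action) -> expiry time.
# In a real zero-trust deployment this would be an external policy service, not a local dict.
ACCESS_GRANTS = {
    ("playout-svc", "api:/v1/transcode", "POST"): time.time() + 3600,
    ("playout-svc", "storage:masters/movie-001", "read"): time.time() + 3600,
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    """Validate one access request against the central store; deny by default."""
    expiry = ACCESS_GRANTS.get((principal, resource, action))
    return expiry is not None and time.time() < expiry  # expired grants count as revoked

def handle_transcode_request(principal: str, asset: str) -> str:
    # 1. Validate the API call itself.
    if not is_allowed(principal, "api:/v1/transcode", "POST"):
        return "403 Forbidden: API call not authorized"
    # 2. Validate the storage access separately - reaching the API does not imply media access.
    if not is_allowed(principal, f"storage:{asset}", "read"):
        return "403 Forbidden: storage access not authorized"
    return f"200 OK: transcoding {asset}"

print(handle_transcode_request("playout-svc", "masters/movie-001"))
```

The point of the two separate checks is that passing the first never implies the second: every hop towards the media is validated in its own right.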

One of the challenges of implementing zero trust is that it must be built in from the very beginning of the hardware infrastructure design. Furthermore, any apps or microservices working within this environment must also be written from the ground up to include validation as part of their core function. A microservice that does not provide this type of validation may not even be able to run, and even if it did, it wouldn’t be able to access the storage or database systems.

Deep Packet Inspection

Broadcast APIs generally have two different signal flows: one that interfaces with the API control, and one that is responsible for transferring the video and audio streams to other microservice applications and storage. API calls must be fast and responsive, but once made they are typically only called again occasionally. Media flows are completely different and are far more susceptible to latency and packet loss, especially as most are transferred using UDP to avoid the variable latency introduced by TCP retransmissions.
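A minimal sketch of the two flows, assuming a JSON control call over TCP and a UDP media stream with a toy sequence header; the addresses, ports and payload format are illustrative assumptions rather than any specific broadcast protocol.

```python
import json
import socket
import struct

CONTROL_ADDR = ("10.0.0.10", 8080)   # hypothetical control-plane endpoint (TCP)
MEDIA_ADDR = ("239.1.1.1", 5004)     # hypothetical media multicast group (UDP)

def send_control_call(request: dict) -> None:
    """Control plane: an infrequent, reliable API call over TCP."""
    with socket.create_connection(CONTROL_ADDR, timeout=2) as sock:
        sock.sendall(json.dumps(request).encode())

def send_media_packets(payloads: list[bytes]) -> None:
    """Media plane: a continuous stream of small, time-critical packets over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, payload in enumerate(payloads):
        header = struct.pack("!H", seq & 0xFFFF)   # toy sequence number, not a real RTP header
        sock.sendto(header + payload, MEDIA_ADDR)  # no retransmission - late packets are useless
    sock.close()
```

The control path tolerates a retransmitted segment without anyone noticing; the media path cannot, which is why it runs over UDP and simply keeps going when a packet is lost.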

As part of their security requirements, IT departments often require Deep Packet Inspection (DPI) of IP packets as they enter and leave networks, or enter and leave servers. DPI software analyzes the payload of every IP datagram to check for viruses, malware and other nefarious intrusions such as denial-of-service attacks. This may well be considered a necessity in enterprise networks, but it has the potential to cause massive problems for broadcast infrastructures, especially when streaming video and audio.
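In essence, DPI is a per-packet payload scan. The fragment below is a deliberately naive illustration of that cost, matching every payload against a small list of byte signatures; real DPI engines use far larger rule sets and much more expensive analysis.

```python
# Hypothetical byte signatures; real DPI engines use large, regularly updated rule sets.
SIGNATURES = [b"\x4d\x5a\x90\x00", b"DROP TABLE", b"<script>"]

def inspect_payload(payload: bytes) -> bool:
    """Return True if the payload looks suspicious - every single packet pays this cost."""
    return any(sig in payload for sig in SIGNATURES)

def filter_packets(packets: list[bytes]) -> list[bytes]:
    """Pass on only the packets whose payloads clear inspection."""
    return [p for p in packets if not inspect_payload(p)]
```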

Within the software defined infrastructure of a broadcast system, most of the packets containing video and audio will be unencrypted; the exception is the point in a playback system where the packets are encrypted before being transmitted or streamed over the internet. This leaves them as vulnerable to attack as any other IP packets, and so they may be subject to having their payloads inspected.

Video processing in servers is known to be very hungry for CPU power, even when CPU parallelization is employed. The NICs are also known bottlenecks, and if kernel bypass cannot be employed they become a point of serious congestion, resulting in massively delayed or even lost packets, something we do not want to experience in broadcast infrastructures. Often, the CPU cores are driven close to their limits to maximize video and audio throughput, and if we add to the mix the need to inspect the payload of every video and audio packet, this only exacerbates the potential for packet latency and delay.
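Some rough, illustrative arithmetic shows why this matters. Assuming a single uncompressed HD stream of around 3 Gb/s carried in 1,400-byte payloads and handled by one 3 GHz core, the per-packet CPU budget is already tight before any payload inspection is added; the figures below are assumptions, not measurements.

```python
STREAM_BITRATE_BPS = 3e9        # assumed ~3 Gb/s uncompressed HD stream
PAYLOAD_BYTES = 1400            # assumed payload size per packet
CORE_CLOCK_HZ = 3e9             # assumed 3 GHz CPU core

packets_per_second = STREAM_BITRATE_BPS / 8 / PAYLOAD_BYTES
cycles_per_packet = CORE_CLOCK_HZ / packets_per_second

print(f"{packets_per_second:,.0f} packets/s")      # roughly 268,000 packets per second
print(f"{cycles_per_packet:,.0f} cycles/packet")   # roughly 11,000 cycles for ALL per-packet work
```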

Kernel bypass is a technique that copies packets from the NIC directly into the server’s memory. This negates the need for the processor to monitor the NIC and copy packets to and from memory as required, making a great saving in CPU resource and improving latency. However, there is no guarantee that the microservices infrastructure will facilitate kernel bypass, and the added overhead of the container system could cause the server to overload if deep packet inspection is adopted.

The DPI requirement can be frustrating for broadcasters, as they want to maximize server throughput, but in this scenario a good deal of CPU resource could be consumed by these security measures. A deep and meaningful discussion then manifests itself with the IT department to find some sort of compromise. It might be that they relax the DPI requirement for that part of the infrastructure, so all is good. Or it might be that company policy demands the DPI is kept in place, in which case the broadcaster will have to reduce the server’s throughput.

This can become a bit of a chicken-and-egg situation, especially when we consider the overhead of the containers and microservices. One solution is to make sure adequate monitoring is employed so that packet delay and loss can be measured and system resources adjusted accordingly, or at the very least so the broadcaster can demonstrate to the IT department the potential negative effects of the DPI software.
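As a sketch of the kind of monitoring that can make this case, the fragment below counts missing sequence numbers and records the largest inter-arrival gap on a media stream; the 16-bit sequence numbering and the reporting format are assumptions for illustration.

```python
import time

class StreamMonitor:
    """Tracks packet loss and inter-arrival gaps from per-packet sequence numbers."""

    def __init__(self) -> None:
        self.expected_seq = None
        self.lost = 0
        self.received = 0
        self.last_arrival = None
        self.max_gap_ms = 0.0

    def on_packet(self, seq: int) -> None:
        now = time.monotonic()
        if self.expected_seq is not None and seq != self.expected_seq:
            self.lost += (seq - self.expected_seq) % 65536   # 16-bit sequence wrap
        self.expected_seq = (seq + 1) % 65536
        self.received += 1
        if self.last_arrival is not None:
            self.max_gap_ms = max(self.max_gap_ms, (now - self.last_arrival) * 1000)
        self.last_arrival = now

    def report(self) -> str:
        return f"received={self.received} lost={self.lost} max_gap={self.max_gap_ms:.2f}ms"
```

Feeding such a monitor from a mirrored or sampled copy of the media flow gives the broadcaster hard numbers to put in front of the IT department, rather than anecdotes about dropped frames.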

All things in engineering are a compromise, and software defined infrastructures, along with their security measures, are no different. Here, operational requirements must be balanced against security, a challenge that will usually need the CEO of the broadcast facility to resolve.
