Making Cloud Systems Secure - Part 1

Security for cloud and internet systems is playing an ever-increasing role in broadcast infrastructures. High-value media assets and communication channels to broad audiences are at risk, so it is reasonable to assume that unidentified hostile actors are lurking in every corner.


The good news is that there is much broadcasters can do to protect themselves from attack. Although no system can ever be completely secure, it’s worth remembering that even traditional SDI and AES broadcast facilities had their vulnerabilities. They were just different, and broadcasters assumed they knew where they were, but they often didn’t.

Managing and understanding risk is key to maintaining security. Furthermore, a vast array of detection and analysis tools is available to help broadcasters understand network and infrastructure vulnerabilities, not least because the IT industry has been working on these challenges for many years.

Delivering effective security relies not only on technical understanding but also on the attitudes of users and how they approach security. To be effective, security must be built on a positive and productive mindset that is promoted and encouraged from the CEO down, so that it manifests itself as a culture throughout the broadcast facility.

As one of the biggest vulnerabilities in any IT system is human error, effective cloud security is a way of life that must be encouraged. Systems also need to be designed with the right processes, features, and patches in place from the beginning, and then maintained throughout the lifetime of the system.

Problem to Solve

To deliver cloud security, broadcasters need to achieve the following:

  1. Protect data storage, processing systems, and networks from data theft.
  2. Develop a data recovery plan in case data is lost or corrupted.
  3. Reduce human error so that negligence cannot compromise data.
  4. Ringfence the impact of any data loss or system compromise.

Although much of our approach to cloud security revolves around stopping malicious actors from penetrating the network in the first place, we must also be mindful of the need to back up data and be able to recover from data loss or corruption.

Intuitively, we may want to treat data recovery separately from stopping intrusion, but in many instances there is a great deal of overlap. Furthermore, data loss or corruption may not be the consequence of a malicious act; it may be a simple mistake, such as a user deleting a master media file. Preventing users from making these mistakes falls under the discipline of user access rights, but restoring the media asset should be a key part of the data recovery plan.
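
As an illustration of that overlap, the sketch below (assuming a versioned, S3-compatible object store and the boto3 SDK, with hypothetical bucket and object names) shows how a deleted master file could be restored from an earlier object version rather than being lost for good.

```python
# Minimal sketch: recover an accidentally deleted master media file from a
# versioned object store. Bucket and key names are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
bucket, key = "media-archive", "masters/episode-001.mxf"

# List the surviving versions of the object (newest first) ...
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
latest = versions["Versions"][0]

# ... and copy the most recent one back over the delete marker.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key, "VersionId": latest["VersionId"]},
)
```

Of course, this only works if versioning was enabled before the mistake was made, which is why recovery planning belongs at the design stage.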

Restoring a file is important, but it’s only part of the equation. Equally important is limiting a hacker’s ability to disrupt an ongoing service, which is where system redundancy and the ability to fail over to a backup system come into play.

Cloud systems have the potential to make data recovery much easier due to the multitude of storage options available. High-speed near-line storage can be archived to off-line storage, which is often cheaper but slower. However, these storage systems also need to be protected from malicious attacks. Even if a broadcaster archives their cloud storage to on-prem systems, the two are intrinsically linked, and adequate security must be maintained between them.
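
As a sketch of how that near-line to off-line tiering might be automated (assuming an S3-compatible store and the boto3 SDK; the bucket name, prefix, and storage class are illustrative assumptions):

```python
# Minimal sketch: a lifecycle rule that moves objects under 'masters/' to a
# cheaper, slower archive tier after 30 days.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="media-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-masters",
                "Status": "Enabled",
                "Filter": {"Prefix": "masters/"},
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```

The archive tier then needs the same access controls and encryption as the near-line tier, because a lifecycle rule moves the data but not the responsibility for protecting it.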

Again, we also need to protect active processing, not just storage.

Outdated Approaches

Traditional methods of IT security used the perimeter wall approach. That is, the access points to the network were heavily guarded so that if a hostile actor tried to gain access, they could only do so through a limited number of points that could be protected. One example of this is the firewall on the internet router.

Firewalls and intrusion detection systems would be placed at the internet connection point to the ISP so that any malicious access could be detected and stopped. The fundamental challenge with this strategy is that it relies on knowledge of the attack pattern, which can only be gained if another organization has been subject to the attack, detected it, noted the pattern, and then shared it. That said, this is still a very important part of infrastructure protection.

The challenges of the perimeter method have been further compounded in recent years as users have become more reliant on the internet. Bring-your-own-device policies and the reliance on cloud systems have further exposed the inadequacies of this approach. Once attackers are inside the perimeter wall, they can cause all kinds of havoc, sometimes lying in wait for weeks or even months before launching their attack.

Figure 1 – Traditional perimeter approaches to network and resource security are flawed in modern cloud and IT infrastructures as they lack intra-zone traffic inspection, lack flexibility, and have single points of failure.


As IT and datacenter infrastructures increase in complexity, the need to improve our approach to security has become clear. This isn’t just a matter of improving antivirus software or getting better at detecting rogue traffic in a network (although these are clearly very important); it is also about adopting a new mindset that assumes nobody, and no transaction, can be trusted.
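
One hedged illustration of that mindset (using the PyJWT package; the secret, audience, and header handling are assumptions for this sketch, not a prescribed design) is to verify a signed token on every single API call, regardless of where on the network the call originates:

```python
# Minimal sketch: trust no transaction - every request must present a valid,
# signed token, even if it appears to come from 'inside' the network.
import jwt  # PyJWT

SECRET = "replace-with-a-secret-from-a-key-management-service"

def authorize(headers: dict) -> dict:
    """Verify the bearer token carried by a request before doing any work."""
    token = headers["Authorization"].removeprefix("Bearer ").strip()
    # Raises jwt.InvalidTokenError if the signature, expiry or audience is wrong.
    return jwt.decode(token, SECRET, algorithms=["HS256"], audience="playout-api")
```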

Encryption

Stored data is often encrypted so that if an unauthorized user does gain access to the storage, they will be unable to use the data. They may well be able to read the encrypted files, but they won’t be able to decode and view them.
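
As a minimal sketch of encryption at rest (assuming the Python cryptography package and a hypothetical media file; in practice, cloud providers also offer server-side encryption with managed keys):

```python
# Minimal sketch: encrypt a media file so the stored bytes are useless without the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this in a key-management service, never next to the data
cipher = Fernet(key)

with open("master_asset.mxf", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("master_asset.mxf.enc", "wb") as f:
    f.write(ciphertext)

# Recovery requires the key: cipher.decrypt(ciphertext) returns the original bytes.
```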

Data is not only vulnerable when it is stored, but also when it is in transit between processes or when the entire platform is under attack. Anybody eavesdropping on the network can gain a whole host of information about the infrastructure, potentially paving the way for an attack.

Exchanges between cloud software components, including microservices, often use RESTful APIs. As public and private clouds are connected via the internet, their communication protocols must comply with internet standards.

REST (Representational State Transfer) provides methods and standards that allow computer systems on the internet to exchange data and therefore communicate with each other. Although this is a massively versatile approach, it typically runs over plain-text HTTP, meaning that without additional measures it is highly insecure. Anybody with a network sniffer can view the messages and gain a great deal of knowledge about the sending and receiving networks. This is especially worrying when the communications are being freely exchanged across the open internet.
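
To make the exposure concrete, the sketch below (with a hypothetical endpoint and token) shows a plain-HTTP REST call; every header and the JSON body, including the bearer token, crosses the network as readable text that a packet sniffer can capture:

```python
# Minimal sketch: a REST call over plain HTTP - nothing here is protected in transit.
import json
import http.client

conn = http.client.HTTPConnection("api.example-broadcaster.com", 80)
payload = json.dumps({"assetId": "master-0042", "action": "transcode"})
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer not-a-real-token",  # visible to anyone on the path
}
conn.request("POST", "/v1/jobs", body=payload, headers=headers)
print(conn.getresponse().status)
```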

This leads to another potential issue: man-in-the-middle attacks, as any endpoint using a RESTful API can be impersonated by a malicious actor. This is possible largely because endpoint validation was not built into the original HTTP (Hypertext Transfer Protocol) specifications. A malicious actor could intercept traffic on the network, redirect it to their own server, and, once they’ve done that, easily harvest the user’s credentials.

To alleviate both these challenges, a method of validating API endpoints was developed using public-private key encryption. This resulted in the adoption of HTTPS (Hypertext Transfer Protocol Secure), which uses TLS (Transport Layer Security) as its underlying security method. HTTPS addresses three challenges: confidentiality, authenticity, and integrity. Confidentiality stops anybody snooping on the connection, as it is encrypted so that all sensitive data is obscured. Authenticity guarantees the sender and receiver are who they say they are (thus stopping man-in-the-middle attacks). And integrity guarantees that the data exchanged between the endpoints hasn’t been tampered with or modified.
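
A minimal sketch of the same kind of call made over HTTPS (assuming the hypothetical endpoint presents a certificate signed by a trusted CA, and using the widely available requests package):

```python
# Minimal sketch: the same REST exchange over HTTPS/TLS - encrypted, with the
# server's identity checked against its certificate before any data is sent.
import requests

response = requests.post(
    "https://api.example-broadcaster.com/v1/jobs",
    json={"assetId": "master-0042", "action": "transcode"},
    headers={"Authorization": "Bearer not-a-real-token"},
    verify=True,   # reject endpoints whose certificate cannot be validated
    timeout=10,
)
response.raise_for_status()
```

With certificate verification left enabled, a man-in-the-middle presenting an untrusted certificate causes the request to fail rather than silently leaking credentials.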
