Making Cloud Systems Secure - Part 1

Security for cloud and internet systems is playing an ever-increasing role in broadcast infrastructures. High-value media assets and communication channels to broad audiences are at risk, so it is reasonable to assume that unidentified hostile actors are lurking in every corner.



The good news is that there is much a broadcaster can do to help protect themselves from attack. Although no system can ever be completely secure, it’s worth remembering that even traditional SDI and AES broadcast facilities had their vulnerabilities. They were just different, and broadcasters assumed they knew where they were, but they often didn’t.

Managing and understanding risk is key for maintaining security. Furthermore, a vast array of detection and analysis tools are available to help broadcasters understand network and infrastructure vulnerabilities, especially as the IT industry has been working at finding solutions to these challenges for many years.

Delivering effective security relies not only on our technical understanding but also on the attitudes of users and how they approach security. To be effective, security must encompass a positive and productive mindset that is promoted and encouraged from the CEO down so that it manifests itself as a culture throughout the broadcast facility.

As one of the biggest vulnerabilities in any IT system is human error, effective cloud security is a way of life that must be encouraged. Systems also need to be designed with the right processes, features, and patches in place from the beginning, and then maintained throughout the lifetime of the system.

Problem to Solve

To deliver cloud security, broadcasters need to achieve the following:

  1. Protect data storage, processing systems, and networks from data theft.
  2. Develop a data recovery plan in case data is lost or corrupted.
  3. Mitigate human negligence so that data cannot be compromised.
  4. Ringfence the impact of data loss or a compromise of the system.

Although much of our approach to cloud security revolves around stopping malicious actors from penetrating the network in the first place, we must also be mindful of the need to back up data and be able to recover from data loss or corruption.

Intuitively, we may want to treat data recovery separately from stopping intrusion, but in many instances there is a great deal of overlap. Furthermore, data loss or corruption may not be a consequence of a malicious act, but instead a simple mistake such as a user deleting a master media file. The fact that users shouldn’t be able to make these mistakes falls within the discipline of user access rights, but restoring the media asset should be a key part of the data recovery plan.

Restoring a file is important but it’s only a part of the equation. Equally important is isolating a hacker’s ability to disrupt an ongoing service. This is where system redundancy and the ability to fail over to a backup system are important.

Cloud systems have the potential to make data recovery much easier due to the multitude of storage options available. High-speed near-line storage can be archived to off-line storage, which is often cheaper but slower. However, these storage systems also need to be protected from malicious attacks. Even if a broadcaster archives their cloud storage to an on-prem facility, the two systems are intrinsically linked, and adequate security must be maintained between the two.
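
As an illustration, most public cloud object stores expose lifecycle rules that move assets from near-line to cheaper archive tiers automatically. The sketch below is a minimal example assuming AWS S3 with Glacier as the off-line tier; the bucket name and prefix are hypothetical placeholders, and other providers offer equivalent mechanisms.

```python
import boto3

# Minimal sketch: transition media masters to an archive tier after 30 days.
# Bucket name and prefix are hypothetical; the archive bucket still needs the
# same access controls and monitoring as the near-line tier.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-masters",
            "Filter": {"Prefix": "masters/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```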

Again, we need to address protecting active processing, not just storage.

Outdated Approaches

Traditional methods of IT security used the perimeter wall approach. That is, the access points to the network were heavily guarded so that if a hostile actor tried to gain access, they could only do so through a limited number of points that could be protected. One example of this is the firewall on the internet router.

Firewalls and intrusion detection systems would be placed at the internet connection point to the ISP so that any malicious access could be detected and stopped. But the fundamental challenge with this strategy is that it relies on knowledge of the attack pattern, which can only be gained if another organization has been subject to the attack, detected it, noted the pattern, and then shared it. That said, this is still a very important part of infrastructure protection.

The challenges of the perimeter method have been further compounded in recent years as users become more reliant on the internet. Bring-your-own-device policies and the reliance on cloud systems have further exacerbated the inadequacies of this approach. Once an attacker is inside the perimeter wall, they can cause all kinds of havoc, sometimes lying in wait for weeks or even months before launching their attack.

Figure 1 – Traditional perimeter approaches to network and resource security are flawed in modern cloud and IT infrastructures as they lack intra-zone traffic inspection, lack flexibility, and have single points of failure.


As IT and datacenter infrastructures increase in complexity, the need to improve our approach to security has become clear. This isn’t just a matter of improving antivirus software or increasing our ability to detect rogue traffic in a network (although these are clearly very important) but is also about adopting a new mindset that assumes nobody and no transaction can be trusted.

Encryption

Stored data is often encrypted so that if an unauthorized user does gain access to the storage, they will be unable to use the data. They may well be able to read the encrypted files, but they won’t be able to decode and view them.
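
As a simple illustration of encryption at rest, the sketch below uses the Fernet recipe from the Python cryptography library; in a real system the key would be held in a key management service rather than generated alongside the data it protects.

```python
from cryptography.fernet import Fernet

# Illustrative only: in practice the key lives in a key management service
# (KMS), never next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# This ciphertext is what gets written to storage; an intruder who copies it
# cannot decode it without the key.
ciphertext = cipher.encrypt(b"high value media asset")

plaintext = cipher.decrypt(ciphertext)
assert plaintext == b"high value media asset"
```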

Data is not only vulnerable when it is stored, but also when it is in transit between processes or if the entire platform is under attack. Anybody eavesdropping on the network will be able to gain a whole host of information about the infrastructure leading to a potential attack.

Exchanges between cloud software components, including microservices, often use RESTful APIs. As public and private clouds are connected via the internet, their communication protocols must comply with internet standards.

REST (Representational State Transfer) provides methods and conventions that allow computer systems on the internet to exchange data and therefore communicate with each other. Although this is a massively versatile approach, it typically runs over plain-text HTTP, meaning that without additional measures it is highly insecure. Anybody with a network sniffer will be able to view the messages and gain a great deal of knowledge about the sending and receiving networks. And this is especially worrying as the communications are being freely exchanged across the open internet.
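
To illustrate the point, the sketch below (using a hypothetical endpoint and token) makes a RESTful request over plain HTTP; the method, path, headers, and bearer token all travel across the network as readable text that a sniffer can capture verbatim.

```python
import http.client

# Hypothetical endpoint: everything in this request crosses the network in
# the clear, including the Authorization header.
conn = http.client.HTTPConnection("api.example.com", 80)
conn.request(
    "GET",
    "/v1/assets/master-file",
    headers={"Authorization": "Bearer SECRET-TOKEN"},
)
response = conn.getresponse()
print(response.status)
```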

This leads to another potential issue: man-in-the-middle attacks, as any endpoint using a RESTful API can be impersonated by a malicious actor. This scenario arises because endpoint validation was not built into the original web HTTP (Hyper Text Transfer Protocol) specifications. A malicious actor could intercept the traffic on the network, redirect it to a server of their own, and then easily harvest the user’s credentials.

To alleviate both these challenges, a method of validating the API endpoints was developed using public-private key encryption. This resulted in the adoption of HTTPS (Hyper Text Transfer Protocol Secure), which uses TLS (Transport Layer Security) as its underlying security method. HTTPS solves three challenges: confidentiality, authenticity, and integrity. Confidentiality stops anybody snooping on the connection, as it is encrypted so that all sensitive data is obscured. Authenticity guarantees the sender and receiver are who they say they are (thus stopping man-in-the-middle attacks). And integrity guarantees that the data exchanged between the endpoints hasn’t been tampered with or modified.
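
In practice this means a REST client should only ever talk to its endpoints over HTTPS and should refuse connections whose certificates cannot be validated. A minimal sketch, again with a hypothetical endpoint and token, using the Python requests library:

```python
import requests

# requests validates the server certificate against trusted CAs by default,
# providing the authenticity guarantee that defeats man-in-the-middle
# impersonation; the connection itself is encrypted by TLS.
response = requests.get(
    "https://api.example.com/v1/assets/master-file",
    headers={"Authorization": "Bearer SECRET-TOKEN"},
    timeout=10,
    verify=True,  # never disable certificate verification in production
)
response.raise_for_status()
```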
