As home-working frees users from the constraints of a physical location, and infrastructures move increasingly to hybrid-cloud approaches, the outdated concept of perimeter security is giving way to zero-trust security.
Perimeter strategies are the traditional method of securing IT infrastructures. They can be thought of as a large medieval castle with high walls and a single entry point over a moat keeping out attackers. This model rests on two fundamental assumptions: any user authenticated within the corporate network is safe, and once authenticated, the user will behave responsibly.
Within the tightly managed corporate infrastructure these assumptions are generally acceptable. However, as users work more from home or remotely, bring-your-own devices are encouraged by employers, and hybrid cloud working becomes the norm, these presumptions start to come under a lot of scrutiny.
Zero trust is a security strategy that addresses these concerns and rests on three fundamental tenets: never trust, always verify; implement least privilege; and assume a security breach.
It’s important to note that zero trust isn’t a single software application, or a management directive, but is instead a whole strategy that involves every person within an organization, as well as the technology.
Never trust and always verify is already second nature to most technologists working in a broadcast facility, so this is nothing new for them. However, the IT methodologies must change so that every device and user connected to the system is treated as a potential source of a security breach.
The concept of trust is a human attribute that we've applied to computer security. We have varying understandings of what it means to be secure, and the actual effectiveness of security is biased by our human emotions. For example, somebody we know within the organization may be wearing their security badge, so we assume they have a right to be there. However, they might have been fired without our knowledge, no longer hold a valid pass, and have managed to get into the building to do something malicious. Most of us would assume they belong there and have simply forgotten their pass. The co-worker should have been challenged, but few people would do this.
These thought processes tend to spread into the security ecosystem, and it takes a special type of person to spot and highlight them, the sort of person who assumes everybody is a threat. But if we all walked around assuming everybody was a threat then society would soon degenerate. With some forward thinking, we can make zero-trust computer security effective without affecting society.
Figure 1 – traditional perimeter defense methodologies assumed points of access could be well defined and protected. As we move to a connected world with home-workers, bring-your-own devices, and hybrid-cloud computing, zero-trust mechanisms must be employed.
When a user logs onto the network, they are usually validated by the central authorization system, and with the perimeter wall approach they would then be relatively free to access most servers and processes. Zero-trust methodologies go further. For example, the user also needs to validate the network they're logging into, so they can be sure they will not become the victim of a man-in-the-middle attack. Then, each time the user accesses a network, server, or process, the central administration system validates them again.
This may sound like a lot of work, but it isn’t. With secure certificates and authentication tokens, users can traverse the network without having to log in at every junction. But importantly, the secure certificates and authentication tokens are being constantly checked by every network access point, server, and process, thus keeping the infrastructure secure.
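To make the idea concrete, the sketch below shows a minimal token scheme of this kind: a central authorization system issues a signed, time-limited token once, and every access point re-verifies it without asking the user to log in again. This is an illustrative assumption, not a description of any specific product; the key, field layout, and one-hour lifetime are all hypothetical, and a real deployment would use PKI certificates or a standard such as JWT rather than a hand-rolled format.

```python
import hmac
import hashlib
import time

# Hypothetical shared signing key; in practice each service would rely on
# certificates issued by the organization's PKI, not a static secret.
SECRET_KEY = b"example-signing-key"
TOKEN_TTL_SECONDS = 3600  # assumed one-hour token lifetime

def issue_token(user_id: str) -> str:
    """Central authorization system signs a token for a verified user."""
    issued_at = str(int(time.time()))
    payload = f"{user_id}|{issued_at}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_token(token: str) -> bool:
    """Every network access point, server, and process re-checks the token."""
    try:
        user_id, issued_at, signature = token.split("|")
    except ValueError:
        return False  # malformed token
    payload = f"{user_id}|{issued_at}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # forged or tampered token
    # Reject expired tokens so a stolen token has a limited window of use.
    return int(time.time()) - int(issued_at) <= TOKEN_TTL_SECONDS
```

Because verification needs only the token and the signing material, every junction in the network can check it cheaply, which is what allows constant re-validation without repeated logins.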
The processes, servers, and individual networks that make up the broadcaster’s infrastructure must be installed with security mechanisms such as certification or authentication certificates, and this is where the zero-trust thinking starts to manifest itself. IT departments should not allow an application to be installed if it cannot be verified using one of these secure methods.
Least privilege is the idea of only providing users with read, write, and execute access as needed for their role. It’s very easy to give every SoftDev engineer access to every project in the software repository. But this has the potential to cause massive security issues, especially when considering disgruntled employees, or even a breach of a user’s login credentials. In this case, only the specific projects a SoftDev engineer is working on should be made available to them.
There should be a forensic audit process when giving users access to any network, server, storage, or process with regular reviews. A database of all users and their access rights should be held that is regularly reviewed so that users are not left with privileges they do not need.
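The two ideas above, deny-by-default access and regular review of the rights database, can be sketched as follows. The user names, project names, and 180-day review interval are hypothetical examples, not taken from any real system.

```python
from datetime import date, timedelta

# Hypothetical access-rights database: user -> {resource: last review date}.
ACCESS_RIGHTS = {
    "dev_anna": {"project_playout": date(2024, 5, 1)},
    "dev_ben": {"project_playout": date(2022, 1, 10),
                "project_graphics": date(2024, 6, 3)},
}
REVIEW_INTERVAL = timedelta(days=180)  # assumed review cycle

def has_access(user: str, resource: str) -> bool:
    """Least privilege: grant access only if explicitly listed - deny by default."""
    return resource in ACCESS_RIGHTS.get(user, {})

def stale_privileges(today: date) -> list:
    """Flag grants not reviewed within the interval, so unused rights are revoked."""
    return [(user, resource)
            for user, grants in ACCESS_RIGHTS.items()
            for resource, reviewed in grants.items()
            if today - reviewed > REVIEW_INTERVAL]
```

Running the review regularly surfaces entries like dev_ben's long-unreviewed playout access, exactly the kind of leftover privilege the audit process is meant to catch.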
It might look like there is a lot of process going on and, compared with traditional broadcast attitudes to security, it's fair to say there is. But these processes not only protect the here and now; they also give IT specialists the opportunity to forensically investigate breaches should they occur, which is incredibly important for learning from mistakes.
Assuming a breach in security further improves system designs. For example, we should question what happens to all the high value media assets if the storage system is breached. Should they be ringfenced with alarms so that every time a user accesses this area a notification is sent? Or, if we assume that an encryption virus successfully attacks, what will be the impact on stored data and the overall business?
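The ringfencing-with-alarms idea can be sketched as a simple audit wrapper: every asset access is logged, and access to a designated high-value area additionally raises an alert. The asset names and the in-memory alert list are hypothetical; in practice the alerts would feed a SIEM or notification system.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("asset-audit")

# Hypothetical set of ringfenced, high-value media assets.
RINGFENCED = {"masters/feature_film.mxf", "masters/series_finale.mxf"}

alerts = []  # stand-in for a real notification/SIEM feed

def access_asset(user: str, asset: str) -> str:
    """Log every access; ringfenced assets also trigger a security alert."""
    log.info("user=%s accessed asset=%s", user, asset)
    if asset in RINGFENCED:
        alerts.append((user, asset))  # notify the security team
    return f"granted:{asset}"
```

The log trail is what later enables the forensic analysis of a breach; the alert channel is what turns "assume a breach" into early detection rather than after-the-fact discovery.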
It might seem that we are running the risk of falling down a deep and winding rabbit hole with this type of thinking, but what we have done is engage in risk management as well as prevention. By assuming that a hack or breach will occur, we can more effectively assess the risk and impact to the business.
Zero-trust methodologies are more than just a new process or software application; they encapsulate a whole new way of thinking and working that delivers better security and risk management. But key to leveraging their power is logging and monitoring that enables forensic analysis should a breach take place, which, under zero trust, we must assume it will.