Open source software has undoubtedly made a massive impact on broadcasting workflows. But should we be concerned about the use of open source when considering security?
There are those who argue that open source software is limited because it is free: it relies on volunteers who code in the dead of night to deliver the next batch of features and apply security fixes when they can. However, others argue that the developers who build these applications and operating systems have a higher calling and are more interested in the challenge and in fighting for their cause.
In other words, the developers and coders who make up the open source community have enormous motivation. Not all open source projects are successful, but some have clearly stood the test of time, such as FFmpeg and Linux. It’s difficult to suggest the open source community isn’t committed when, according to W3Techs, 37% of the servers that make up the internet are running Linux. And how many broadcasters are using FFmpeg in one way or another?
The security argument is a bit more challenging, as the perception may be that we have to wait for a bunch of developers to get home from their day jobs before the latest security vulnerability is fixed. However, the reality is quite different, especially for companies that provide professional software services, as they have a vested interest in making sure open source software such as Linux is highly secure. Not only do they have the necessary skill set, consisting of highly experienced security specialists, but they also bring rigor and process to their application of security fixes.
For me, the question of the security competence of open source as a whole is not relevant as we can’t apply a single judgement over every open source application. They vary enormously in their levels of support and available skill set. Instead, I believe we should be applying the zero trust method of security, not just to the open source code, but to every system and user in the broadcast and IT infrastructure.
Zero trust, also referred to as perimeter-less security, is the concept that no process, software application, operating system, user, server, or network can be trusted. Essentially, every user and device must never be implicitly trusted and must always be verified. This is more of a methodology and philosophy of operation that starts with the CEO and works its way through the whole company. For example, users must be verified every time they access a software application, through a central rights database, and all software must be considered vulnerable. This leads to questions such as “if this transcoding server were compromised, what effect would it have on the rest of the broadcast infrastructure?”
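To make the idea concrete, here is a minimal sketch of per-request verification against a central rights database, as described above. All names here (`RightsDB`, `handle_request`, the grant tuples) are hypothetical illustrations, not any real product's API; a production deployment would use an IAM or policy service.

```python
class RightsDB:
    """Stand-in for a central rights database (e.g. an IAM service).
    Hypothetical example for illustration only."""

    def __init__(self, grants):
        # grants: set of (user, resource, action) tuples
        self.grants = grants

    def is_allowed(self, user, resource, action):
        return (user, resource, action) in self.grants


def handle_request(db, user, resource, action):
    # Zero trust: verify on EVERY request against the central rights
    # database -- no implicit trust from network location, an earlier
    # session, or device identity.
    if not db.is_allowed(user, resource, action):
        raise PermissionError(f"{user} may not {action} {resource}")
    return f"{action} on {resource} authorised for {user}"


# Example: only alice may restart the (hypothetical) transcoding server.
db = RightsDB({("alice", "transcoder-01", "restart")})
print(handle_request(db, "alice", "transcoder-01", "restart"))
```

The point of the sketch is the placement of the check: authorisation happens inside every request handler, so compromising one server or stealing one session does not grant standing access to anything else.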
By adopting a zero trust policy, the question of whether open source is secure or not goes away. We should assume it is insecure, as with all other software, and then create measures to protect the rest of the facility from it. This isn’t just about buying the enterprise version of the code, but truly understanding where the code fits in the infrastructure and what the implications would be if it were compromised. In other words, is there a u-link we can pull to stop the damage? And if not, why not?