Spectre, Meltdown and Broadcasters

The news that processors are potentially vulnerable to attack is just the latest discovery in a long line of vulnerabilities in networked computer systems. It is a salutary reminder of the game of whack-a-mole that systems administrators have to play to protect computer networks. Coming at a time when broadcasters are embracing software applications running on commercial-off-the-shelf (COTS) systems and outsourcing to cloud service providers, it reinforces the need for continuing vigilance to ensure business continuity and to prevent the theft of media files.

Engineering is always a compromise; in this case, the advantages of networked systems are weighed against absolutely secure systems. There is a balance between the cost of security and the business benefits of connected systems.

At one time computer systems were isolated; there was an air gap between them and the outside world. An edit controller was a hub, connected via RS-422 data connections to tape decks, a CG, a switcher and disk stores. It was a bounded system; it had no external data connections.

In the newsroom environment, it became necessary to offer journalists internet access as a basic research tool, and email is just as essential. The news editing workstations can run on a separate media network, but systems are often combined using techniques like virtual machines that allow editing and web browsing on the same workstation. Network firewalls provide the protection.

In the playout environment, COTS equipment has taken over in a networked environment that supports the traffic flows of ads and program files from storage to air servers. Again, such systems can be bounded, but the need to load playlists and retrieve logs often requires network access.

Command and control has become much easier with supervisory systems that monitor the health of system components. Such systems are now considered essential to pre-empt and manage faults.

As we move from SDI to IP as the means of interconnecting video and audio systems, we lose the inherent security of SDI: point-to-point connections that offer no entry point for hackers.

Hardware vulnerability

Spectre and Meltdown present a different problem from general security vulnerabilities. When the weakness is in the code, a fix can be implemented with a software update. When the vulnerability lies in the hardware design, inside the CPU, it becomes more difficult to resolve without swapping out the hardware once a new, hardened design comes along. The fix for Spectre and Meltdown is being implemented in the operating systems, but the downside is that it may impact performance in some applications.
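For administrators who want to confirm whether the operating system fixes have actually been applied, the sketch below shows one way to do it. It assumes a Linux host with a kernel recent enough to expose the /sys/devices/system/cpu/vulnerabilities directory, which was introduced alongside the Meltdown and Spectre patches; on older, unpatched kernels the directory simply is not there.

```python
#!/usr/bin/env python3
"""Report CPU vulnerability mitigation status on a Linux host.

A minimal sketch: it assumes a kernel new enough to expose
/sys/devices/system/cpu/vulnerabilities. Each file in that directory
(e.g. meltdown, spectre_v1, spectre_v2) describes whether the CPU is
affected and which mitigation, if any, is active.
"""
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def report_mitigations():
    if not os.path.isdir(VULN_DIR):
        print("Kernel does not expose vulnerability status; it may predate the patches.")
        return
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            status = f.read().strip()
        print(f"{name:>20}: {status}")

if __name__ == "__main__":
    report_mitigations()
```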

As broadcasters move from specialized hardware to software apps running on generic COTS equipment in data centers, this hit on performance is going to mean either using more processor cores or taking more processor time. Running cloud apps means bigger bills; using more hardware means increased CAPEX. This cost hit has to be factored into the spreadsheets when designing new systems. Realistically, it is going to be a year or more before these vulnerabilities can be designed out of processors, so in the meantime it is an unwanted cost to bear; how much is not yet known.
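To see how that cost might be factored into the spreadsheets, here is a back-of-the-envelope sketch. Every figure in it, the baseline vCPU hours, the hourly rate and the assumed 10% overhead, is an illustrative assumption, not vendor data; the point is only the arithmetic of how a performance loss translates into extra compute spend.

```python
# Back-of-the-envelope estimate of extra cloud spend from a mitigation overhead.
# All figures below are illustrative assumptions, not measured or vendor data.

baseline_vcpu_hours = 10_000   # monthly vCPU hours before patching (assumed)
hourly_rate = 0.05             # cost per vCPU-hour in dollars (assumed)
overhead = 0.10                # assumed 10% performance loss after mitigation

# The same workload now needs proportionally more compute to finish on time:
# new_hours = baseline / (1 - overhead), so the extra is baseline * overhead / (1 - overhead).
extra_hours = baseline_vcpu_hours * overhead / (1 - overhead)
extra_cost = extra_hours * hourly_rate

print(f"Extra vCPU hours per month: {extra_hours:,.0f}")
print(f"Extra monthly cost: ${extra_cost:,.2f}")
```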

Awaiting the benchmarks

Right now, I’m sure systems administrators will be running benchmarks and monitoring performance. Apps that use GPU acceleration are going to be OK; products that run on the CPU only are more at risk. With performance hits of 5 to 30% being quoted, this is not good news. The actual values are more likely to be the lower numbers, with 30% being a worst case. The next month or so should reveal the real impact.
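A crude benchmarking sketch is shown below. It times a system-call-heavy loop, since kernel page-table isolation, the Meltdown fix, adds cost to every user/kernel transition, so this kind of workload exposes the overhead far more than pure number crunching does. Run it before and after applying the patches and compare the rates; the file path and iteration count are arbitrary choices for illustration.

```python
import os
import time

def syscall_benchmark(iterations=200_000, path="/etc/hostname"):
    """Time a syscall-heavy loop and return calls per second.

    Each os.stat() crosses into the kernel, which is exactly where the
    Meltdown mitigation (kernel page-table isolation) adds its overhead.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        os.stat(path)
    elapsed = time.perf_counter() - start
    return iterations / elapsed

if __name__ == "__main__":
    print(f"stat() calls per second: {syscall_benchmark():,.0f}")
```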

Old-school engineers will remember the days when equipment had no microprocessors, let alone hardware processes replaced by software running on regular COTS servers. However, those days are long gone; modern systems have too many business advantages. The downside is the need for constant vigilance and pre-emptive security measures: today’s cost of doing business.
