Spectre, Meltdown and Broadcasters

The news of the potential vulnerability of processors to attack is just the latest discovery in a long line of vulnerabilities of networked computer systems. It is a salutary reminder of the game of whack-a-mole that systems administrators have to play in order to protect computer networks. Coming at a time when broadcasters are embracing software applications running on commercial off-the-shelf (COTS) systems and outsourcing to cloud service providers, it reinforces the need for continuing vigilance to ensure business continuity and to avoid the theft of media files.

Engineering is always a compromise; in this case, the trade-off is between the advantages of networked systems and absolute security. There is a balance between the cost of security and the business benefits of connected systems.

At one time computer systems were isolated; there was an air gap between them and the outside world. An edit controller was a hub, connected via RS-422 data links to tape decks, a CG, a switcher and disk stores. It was a bounded system with no external data connections.

In the newsroom environment, it became necessary to offer journalists internet access as a basic research tool, and email is equally essential. News editing workstations can run on a separate media network, but systems are often combined using techniques like virtual machines that allow editing and web browsing on the same workstation, with network firewalls providing the protection.

In the playout environment, COTS equipment has taken over in a networked environment that supports the traffic flows of ads and program files from storage to air servers. Again, such systems can be bounded, but the need to load playlists and retrieve logs often requires network access.

Command and control has become much easier with supervisory systems that monitor the health of system components. Such systems are now considered essential to pre-empt and manage faults.

As we move from SDI to IP as the means to interconnect video and audio systems, we lose the inherent security of SDI: point-to-point connections that provide no entry point for hackers.

Hardware vulnerability

Spectre and Meltdown present a different problem from general security vulnerabilities. When the weakness is in the code, a fix can be implemented with a software update. When the vulnerability lies in the hardware design, inside the CPU itself, it is more difficult to resolve without swapping out the hardware once a new, hardened design comes along. The fixes for Spectre and Meltdown are being implemented in operating systems, but the downside is that they may impact performance in some applications.
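One practical way to see which operating-system mitigations are active is to read the per-vulnerability status that modern Linux kernels (4.15 and later) expose under sysfs. A minimal sketch, assuming a Linux host; on other systems the function simply returns an empty list:

```python
import os

# Assumption: this sysfs path exists on Linux kernels 4.15+;
# each file holds a status string such as "Mitigation: PTI".
VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def mitigation_status():
    """Return (vulnerability, status) pairs, or [] if the path is absent."""
    if not os.path.isdir(VULN_DIR):
        return []
    status = []
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            status.append((name, f.read().strip()))
    return status

for name, state in mitigation_status():
    print(f"{name}: {state}")
```

On a patched system this typically lists entries for `meltdown`, `spectre_v1` and `spectre_v2` alongside their mitigation state.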

As broadcasters move from specialized hardware to software apps running on generic COTS equipment in data centers, this hit on performance is going to mean either using more processor cores or taking more processor time. Running cloud apps means bigger bills; using more hardware means increased CAPEX. This cost hit has to be factored into the spreadsheets when designing new systems. Realistically it is going to be a year or more before these vulnerabilities can be designed out of processors, so in the meantime it is an unwanted cost to bear; how much is not yet known.
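The spreadsheet arithmetic is simple: a fractional slowdown means the same workload needs proportionally more cores. A minimal sketch, with purely illustrative figures (the cluster size and slowdown percentages are assumptions, not measurements):

```python
import math

def extra_cores(baseline_cores, slowdown):
    """Cores needed to recover throughput after a fractional slowdown.

    Each core now delivers only (1 - slowdown) of its former
    throughput, so the same work needs
    baseline_cores / (1 - slowdown) cores in total.
    """
    needed = math.ceil(baseline_cores / (1.0 - slowdown))
    return needed - baseline_cores

# Illustrative only: a hypothetical 64-core playout cluster
# at the quoted 5% and 30% performance hits.
print(extra_cores(64, 0.05))  # 4 more cores
print(extra_cores(64, 0.30))  # 28 more cores
```

The non-linearity is worth noting: a 30% slowdown does not cost 30% more cores but over 40% more, since the remaining capacity per core shrinks.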

Awaiting the benchmarks

Right now, I’m sure systems administrators will be running benchmarks and monitoring performance. Apps that use GPU acceleration are going to be OK; those products that run on the CPU only are more at risk. With performance hits of 5 to 30% being quoted, this is not good news. The actual values are more likely to be toward the lower numbers, with 30% being a worst case. The next month or so should reveal the real impact.
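A before-and-after comparison needs only a repeatable micro-benchmark run on the same hardware with mitigations off and on. A minimal sketch; the workload and repeat counts below are placeholders, not a calibrated test (the Meltdown patches mostly tax kernel transitions, so syscall-heavy code is where the gap shows):

```python
import os
import timeit

def cpu_workload():
    # CPU-only work, few kernel transitions: sum of squares.
    return sum(i * i for i in range(100_000))

def syscall_workload():
    # Syscall-heavy work: each os.getpid() call crosses into the kernel,
    # which is where page-table-isolation patches add overhead.
    for _ in range(10_000):
        os.getpid()

def benchmark(fn, repeat=5, number=10):
    """Return the best wall-clock time (seconds) over several runs."""
    return min(timeit.repeat(fn, repeat=repeat, number=number))

print(f"cpu-bound:     {benchmark(cpu_workload):.4f}s")
print(f"syscall-heavy: {benchmark(syscall_workload):.4f}s")
```

Running the same script before and after applying the OS patches, and comparing the best-of-run times, gives a rough per-workload slowdown figure to feed into capacity planning.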

Old-school engineers will remember the days when equipment had no microprocessors, let alone hardware processes replaced by software running on regular COTS servers. However, those days are long gone; modern systems have too many business advantages. The downside is the need for constant vigilance and preemptive security measures, the cost of doing business today.
