Networked systems call for continued vigilance.
The news of the potential vulnerability of processors to attack is just the latest discovery in a long line of vulnerabilities in networked computer systems. It is a salutary reminder of the game of whack-a-mole that systems administrators have to play in order to protect computer networks. Coming at a time when broadcasters are embracing software applications running on commercial off-the-shelf (COTS) systems and outsourcing to cloud service providers, it reinforces the need for continuing vigilance to ensure business continuity and to avoid the theft of media files.
Engineering is always a compromise, in this case the advantages of networked systems versus absolutely secure systems. There is a balance between the cost of security and the business benefits of connected systems.
At one time computer systems were isolated; there was an air gap between them and the outside world. An edit controller was a hub, connected via RS-422 data connections to tape decks, a CG, a switcher and disk stores. It was a bounded system, with no external data connections.
In the newsroom environment, it became necessary to offer internet access to journalists as a basic research tool; email, too, is essential. The news editing workstations can be run on a separate media network, but often systems are combined with techniques like virtual machines that allow editing and web browsing on the same workstation. Network firewalls provide the protection.
In the playout environment, COTS equipment has taken over in a networked environment that supports the traffic flows of ads and program files from storage to air servers. Again, such systems can be bounded, but the need to load playlists and retrieve logs often requires network access.
Command and control has become much easier with supervisory systems that monitor the health of system components. Such systems are now considered essential to pre-empt and manage faults.
As we move from SDI to IP as the means to interconnect video and audio systems, we lose the security of SDI's point-to-point connections, which provide no entry point for hackers.
Spectre and Meltdown present a different problem from general security vulnerabilities. When the weakness is in the code, a fix can be implemented with a software update. When the vulnerability lies in the hardware design, inside the CPU, it becomes more difficult to resolve without swapping out the hardware once a new, hardened design comes along. The fix for Spectre and Meltdown is being implemented in the operating systems, but the downside is that it may impact performance in some applications.
As broadcasters move from specialized hardware to software apps running on generic COTS equipment in data centers, this hit on performance is going to mean either using more processor cores or taking more processor time. Running cloud apps means bigger bills; using more hardware means increased CAPEX. This cost hit has to be factored into the spreadsheets when designing new systems. Realistically it is going to be a year or more before these vulnerabilities can be designed out of processors, so in the meantime it is an unwanted cost to bear; how much is not yet known.
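As a rough illustration of how that cost might be factored into the spreadsheet, the extra capacity needed to absorb a given performance hit can be estimated as follows (a minimal sketch; the percentages used are the quoted range, not measured figures):

```python
# Rough estimate of extra compute capacity needed to absorb a
# performance hit of p (expressed as a fraction). If each core now
# delivers only (1 - p) of its former throughput, total capacity
# must be scaled by 1 / (1 - p) to do the same work.

def extra_capacity(perf_hit: float) -> float:
    """Fractional increase in cores or instance-hours needed."""
    if not 0.0 <= perf_hit < 1.0:
        raise ValueError("perf_hit must be in [0, 1)")
    return 1.0 / (1.0 - perf_hit) - 1.0

# Illustrative: a 5% hit needs ~5.3% more capacity,
# a 30% hit needs ~42.9% more.
for hit in (0.05, 0.30):
    print(f"{hit:.0%} hit -> {extra_capacity(hit):.1%} more capacity")
```

Note the relationship is not linear: at the worst-case end of the quoted range, the extra spend is markedly more than the headline percentage suggests.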
Awaiting the benchmarks
Right now, I’m sure systems administrators will be running benchmarks and monitoring performance. Apps that use GPU acceleration are going to be OK; those products that run on the CPU only are more at risk. With performance hits of 5% to 30% being quoted, this is not good news. The actual values are more likely to be the lower numbers, with 30% being a worst case. The next month or so should reveal the real impact.
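A minimal sketch of the kind of before-and-after timing run an administrator might use (the workload here is an illustrative assumption, not a standard benchmark; syscall-heavy applications are affected differently by the mitigations than pure compute):

```python
import time

def cpu_bound_workload(n: int = 200_000) -> int:
    # A pure-compute loop; real tests would also exercise I/O and
    # system calls, which the OS patches affect more heavily.
    total = 0
    for i in range(n):
        total += i * i
    return total

def benchmark(fn, repeats: int = 5) -> float:
    """Return the best wall-clock time over several runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

baseline = benchmark(cpu_bound_workload)
print(f"best of {5} runs: {baseline * 1000:.1f} ms")
# Comparing this figure before and after applying the OS patch gives
# the percentage hit: (patched - baseline) / baseline * 100.
```

The same script run on patched and unpatched systems, with a representative workload, is what will turn the quoted 5% to 30% range into a real number for a given application.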
Old-school engineers will remember the days when equipment had no microprocessors, let alone the replacement of hardware processes with software running on regular COTS servers. However, those days are long gone; modern systems have too many business advantages. The downside is the need for constant vigilance and preemptive security measures, today’s cost of doing business.