There are many different requirements and expectations in the field of computer security and it may be as well to set out what some of them are. Computation is everywhere, from the ubiquitous iPhone to the military computer in an underground installation, and it is unrealistic to think that one security approach would suit all requirements. Computer security is also awash with buzzwords and these will crop up here with definitions of what they might mean.
There are three conflicting requirements in computing, as shown in Fig.1: economics, security and speed. Economics and speed are related, because in an appropriate application, the faster a processor runs, the more work it can do, so the cost per unit of work goes down. Traditionally there has always been pressure to make computers go faster and faster, and that became the overriding ethic. Anything that slowed the computer down was bad news.
That bad news included security, because a secure computer must spend more time being suspicious, meaning less time earning its keep. Being invisible, software often isn't thought about when it should be. No one complains about the extra cost of fitting airplanes with more than one engine so they stay up if one fails. No one complains that cars have dual-circuit brakes and seat belts. Most computers, however, are insecure. With a little extra expenditure they could be made more secure, but this has generally not been done.
In many cases, the implementation of safety procedures has come from government. No individual manufacturer will sacrifice market share by adding to the cost of his product, but if all manufacturers see a level playing field set by regulations, then fair competition can continue. Thus it is a legal requirement that airplanes be certified, that cars have to be constructed in a certain way, that pilots and drivers have had appropriate training and hold appropriate qualifications.
Fig.1 - The classic performance triangle is shown here. Any two corners can be obtained together, but the third is then excluded.
It is more than likely the case that these procedures and requirements came about because bad things started to happen and there was pressure to do something about it. At one time car theft became almost an epidemic and the result was that car locks, alarms and immobilizers improved out of recognition.
We now have the unfortunate situation in IT that lack of security is causing society considerable loss, due for example to viruses and to theft of insecurely held information by hacking. Yet I don't see any meaningful progress being made to remedy the problem. The present lamentable situation doesn't require any inventions or dramatic breakthroughs to solve it. Solutions already exist but are only being used in special cases, where those trying to use IT for high security applications realized that what was generally available wasn't good enough.
Examples of secure systems include military and aerospace applications. Digital cinema is a good example of a secure system that was designed to prevent copyright theft. Now that production of television, radio and movies is almost entirely in the digital domain, potential theft of copyright material should be uppermost in many minds.
Generally awareness of the implications of IT in the entertainment industry is not very good. IT, for example, destroyed the long-standing business model of the sale of pre-recorded music media by traditional record companies. Instead of embracing it, they fought it and lost.
I think there is also a contradiction between the mindset needed for good security, where discipline is important, and the mindset needed for creativity, where the greatest freedom is taken for granted. Possibly an even greater contradiction is between the arcane world of IT and the technical knowledge, or lack of it, of the average politician. There also seems to be a kind of fatalism surrounding computer security, where most people seem to think that nothing can be done and that we just have to live with how it is.
One often comes across uninformed comments to the effect that whatever security is implemented, someone will hack it. Nothing could be further from the truth; the real problem is that things that could be done are simply not being done. No human endeavor is 100 percent secure, and total security is an impossible goal. But in the real world most wrongdoing is the work of opportunists who see simple acts of dishonesty as a source of profit. All that is necessary is to make things a bit harder for the dishonest by taking away the opportunities, and it is no longer worth it.
Fig.2 - Rings of protection are possible in computers, placing unknown software far from the kernel, but often operating systems don't take full advantage of them.
In the most common form of computer security used with von Neumann type computers, the processor works in more than one mode. Applications usually run in user mode and the operating system runs in kernel mode, also called privileged mode. In some cases there is an intermediate mode called supervisor mode. Fig.2 shows that these various modes form what are called rings of protection, concentrically arranged like a moat, a castle wall and a keep.
The inner circle is the kernel, which should constitute a trusted computing base: whatever might go on in the rings outside it, the kernel will always do the right thing to prevent anything inappropriate taking place. The memory management unit is considered part of the trusted computing base, as it has to ensure that user programs can never generate physical addresses outside the user's address space, so that whatever software the user runs can do no external harm.
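The memory management unit's contribution to the trusted computing base can be reduced to a few lines. The sketch below is a deliberately simplified base-and-bound model (real MMUs use page tables, and all the names and address values here are invented for illustration): a user address is translated only if it falls inside the user's own region, otherwise the access is refused and no physical address is emitted at all.

```python
# Hypothetical user address space: one contiguous region per process.
# These constants are assumptions for the sketch, not from any real CPU.
USER_BASE = 0x0040_0000   # start of the user's region
USER_LIMIT = 0x0080_0000  # exclusive upper bound

def mmu_translate(vaddr):
    """Translate a user virtual address, or return None on a fault.

    The MMU is part of the trusted computing base: a user program can
    never obtain a physical address outside its own region, so whatever
    software the user runs can do no external harm.
    """
    if not (USER_BASE <= vaddr < USER_LIMIT):
        return None               # protection fault: access refused
    return vaddr - USER_BASE      # trivial base-and-bound mapping

print(mmu_translate(0x0040_0010))  # → 16: inside the region, translated
print(mmu_translate(0x0000_0000))  # → None: outside the region, refused
```

The essential property is that the check happens in hardware on every access; the user program has no way around it, however hostile its intentions.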
Fig.3a) shows that one could imagine a system having two processors, one performing user computations and the other running the operating system. Each processor has registers and an arithmetic logic unit (ALU). Another way of looking at a computer that runs in more than one mode is that instead of two complete processors, a single ALU is shared between two machines using a multiplexer, as in Fig.3b).
In order to change mode, the processor is halted at the end of an instruction, the multiplexer is switched over and the processor is restarted. It resumes in the new mode precisely where that mode left off, using the registers for that mode, which retained their contents while the other mode was running.
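The shared-ALU arrangement of Fig.3b can be sketched as a toy model. Everything here (the class name, the two-register banks, the single "instruction") is invented for illustration; the point is only that each mode owns a complete register bank, the multiplexer selects one bank at a time, and a mode resumes exactly where it left off with nothing saved or restored.

```python
class ModedProcessor:
    """Toy model of one ALU shared between two modes via a multiplexer."""

    def __init__(self):
        # One complete register bank per mode; each bank keeps its
        # contents while the other mode is running.
        self.banks = {"user": {"pc": 0, "acc": 0},
                      "kernel": {"pc": 0, "acc": 0}}
        self.mode = "user"

    @property
    def regs(self):
        # The "multiplexer": the single ALU only ever sees the bank
        # selected by the current mode.
        return self.banks[self.mode]

    def step_add(self, n):
        # One ALU instruction: accumulate and advance the program counter.
        self.regs["acc"] += n
        self.regs["pc"] += 1

    def switch_mode(self, new_mode):
        # Halt at an instruction boundary, flip the multiplexer, restart.
        self.mode = new_mode

cpu = ModedProcessor()
cpu.step_add(5)              # user mode runs: acc=5, pc=1
cpu.switch_mode("kernel")
cpu.step_add(100)            # kernel mode runs: acc=100, pc=1
cpu.switch_mode("user")
print(cpu.regs)              # → {'pc': 1, 'acc': 5}: user state intact
```

Because neither bank is ever overwritten by the other mode, the mode change itself can be very fast; it is the indirect effects, as the next sections note, that cost the time.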
The trusted computing base is a combination of hardware and software that should work seamlessly together. Ideally the software and the hardware should have been designed for one another and validated to ensure they really do what they are supposed to.
In the real world that is seldom what happens. For example, some operating systems are intended to work on a wide range of hardware. If some of the hardware implements four rings of protection but some only implement kernel and user mode, the OS will only have two modes so it can work on the simplest processor, leaving the extra protection rings of the more sophisticated processors unemployed. You get the keep and the moat, but the castle walls are gone.
Another issue is the pursuit of speed. To function securely, only the operating system may run in kernel mode. All applications have to run in user mode, and if they need something like an I/O transaction they have to request the kernel to perform it. The processor has to perform a mode transition, from user to kernel and back again, and that transition takes time, depending on the sophistication of the processor hardware.
The actual change of mode in the processor can be rapid, if there are register sets for each mode, but the effects of the change of mode may slow the system down. For example after a mode change any cache memory is unlikely to achieve many hits as it will be caching memory addresses used by the previous mode.
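The cold-cache effect is easy to demonstrate with a toy direct-mapped cache. The cache size and the two address streams below are invented for the illustration: the user-mode working set is accessed twice to warm the cache, then a mode change brings in a kernel working set at entirely different addresses, and every one of its accesses misses while evicting the user mode's lines.

```python
LINES = 16  # toy cache: 16 lines, indexed by address modulo LINES

class Cache:
    """Minimal direct-mapped cache that only tracks hits and misses."""

    def __init__(self):
        self.tags = [None] * LINES
        self.hits = self.misses = 0

    def access(self, addr):
        line, tag = addr % LINES, addr // LINES
        if self.tags[line] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[line] = tag   # evict whatever was cached there

cache = Cache()
user_stream = list(range(0, 16))         # user-mode working set
kernel_stream = list(range(1000, 1016))  # kernel-mode working set

for a in user_stream * 2:   # second pass over a now-warm cache
    cache.access(a)
warm_hits = cache.hits      # 16: every user line was already cached

for a in kernel_stream:     # after the mode change, nothing is cached
    cache.access(a)
print(cache.hits - warm_hits, "hits after the mode change")  # → 0
```

The registers may switch instantly, but the new mode still pays for its working set one miss at a time, which is part of why frequent mode transitions slow a system down.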
Fig.3b) - The system of Fig.3a) in which there is only one arithmetic logic unit that is shared between the processors. The trusted computing base includes the kernel registers and the memory management unit.
It's trivially easy to speed up an application that needs a lot of I/O: move it from user mode to kernel mode. That way all the mode transitions are avoided and more processor cycles are available for the application. The result seems to be better value for money, so why not? Well, the extra performance comes at a cost, which is that the kernel cannot really be described as a kernel any more, because the requirements for security have been violated.
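The temptation can be put in numbers. The figures below are purely illustrative assumptions (the per-transition cost varies enormously between processors), but they show the shape of the bargain: an I/O-heavy task pays two transitions per request, and moving it into kernel mode makes that overhead vanish, at the price of the protection ring itself.

```python
# Assumed cost of one mode switch, in cycles; illustrative only.
TRANSITION_COST_CYCLES = 150

def transition_overhead(requests, in_kernel=False):
    """Cycles spent purely on mode transitions for an I/O-bound task.

    Each I/O request normally costs two transitions: user to kernel
    to perform the transaction, then kernel back to user.
    """
    if in_kernel:
        return 0  # no transitions: faster, but the ring is dismantled
    return requests * 2 * TRANSITION_COST_CYCLES

print(transition_overhead(10_000))                  # → 3000000 cycles
print(transition_overhead(10_000, in_kernel=True))  # → 0
```

The zero on the second line is exactly the "better value for money" of the speed hack; what the arithmetic cannot show is the security that was traded away to obtain it.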
Sure, you can increase the performance of a fighter airplane by leaving out the ejector seat, and you can save money building a house by not installing door locks. Any fool knows that. What could be more stupid, except possibly fitting door locks after the robbery has taken place?
What we have today is a compounding of mediocrities where the circles of protection that ought to prevent hacking and virus attacks are systematically eroded. The trusted computing base is typically flawed because the hardware and the kernel code were not created specifically for one another. The number of rings of protection is reduced so that the same code can run on a range of processors and finally the rings may be dismantled altogether by applications running in kernel mode. You couldn't make it up.