Computer Security: Part 2 - Understanding Why Computers Work The Way They Do

To see how to make computers secure, we have to go way back to see how they work.

Once upon a time there were analog computers. The operational amplifiers used in audio mixers are descendants of this technology. The analog computer had to be programmed by physically adjusting a lot of different components. It was tedious, but it couldn't be hacked; the term wasn't known then.

The digital computer then appeared and it was more flexible, because its actions are determined by a stored program. The stored program was basically a set of instructions, in the form of binary numbers, stored on one of the binary media of the day that had been developed for knitting machines, census results and street organs. Later, electronic memory was developed, which allowed data to be stored, erased and re-written, instead of being punched as holes in cardboard.

The precision of a digital computer is determined by the word length, and follows the same rules as digital audio and video. However, computers could be made to work in double precision, which means that the CPU would have two swipes at the data, turning, for example, a 16-bit machine into a 32-bit machine, albeit a slower one.
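As a concrete illustration, the sketch below (in C, with purely illustrative names) shows a 32-bit addition carried out as two 16-bit passes, the second pass consuming the carry from the first. That is the essence of double precision working on a 16-bit machine.

```c
#include <stdint.h>
#include <stdio.h>

/* A minimal sketch of "double precision" on a 16-bit machine:
   a 32-bit addition done as two 16-bit passes, the second pass
   consuming the carry from the first. Names are illustrative. */
static void add32_as_two_16bit_passes(uint16_t a_hi, uint16_t a_lo,
                                      uint16_t b_hi, uint16_t b_lo,
                                      uint16_t *r_hi, uint16_t *r_lo)
{
    uint32_t low = (uint32_t)a_lo + b_lo;        /* first swipe: low words      */
    uint16_t carry = (uint16_t)(low >> 16);      /* carry out of the low half   */
    *r_lo = (uint16_t)low;
    *r_hi = (uint16_t)(a_hi + b_hi + carry);     /* second swipe: high words    */
}

int main(void)
{
    uint16_t hi, lo;
    /* 0x0001FFFF + 0x00000001 = 0x00020000 */
    add32_as_two_16bit_passes(0x0001, 0xFFFF, 0x0000, 0x0001, &hi, &lo);
    printf("%04X%04X\n", (unsigned)hi, (unsigned)lo);   /* prints 00020000 */
    return 0;
}
```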

In a simple machine, the amount of memory the processor can address is also governed by the word length.

One of the computing pioneers was John von Neumann, and he had the idea that the stored program, the input data and the results of the computation could all be stored in the same memory, giving the greatest flexibility.

As memory was then very expensive, having everything in the same memory meant that a small program could work on larger amounts of data. The so-called von Neumann computer architecture was almost universally adopted. Having all its eggs in one basket, so to speak, the von Neumann computer was not tolerant of errors. For example, if there was a mistake in the program, a result might get written in the wrong place and accidentally overwrite an instruction, crashing the program.
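The toy program below (not a model of any real machine) illustrates the hazard: instructions and data share one memory array, so a single wrongly computed address lets a data write silently destroy an instruction and crash the program.

```c
#include <stdio.h>

/* A toy illustration of the von Neumann hazard: instructions and data
   share one memory array, so a data write to the wrong address silently
   overwrites an instruction. */
enum { HALT = 0, PRINT = 1 };

int main(void)
{
    int memory[8] = { PRINT, PRINT, HALT, 0, 42, 0, 0, 0 }; /* code, then data */

    int bad_index = 2;          /* a buggy program computes the wrong address  */
    memory[bad_index] = 99;     /* ...and overwrites the HALT instruction      */

    /* The "processor" now runs into a corrupted instruction. */
    for (int pc = 0; pc < 8; pc++) {
        if (memory[pc] == HALT) { puts("halted normally"); return 0; }
        if (memory[pc] == PRINT) puts("print");
        else { puts("illegal instruction - crash"); return 1; }
    }
    return 1;
}
```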

Fig.1 - In memory management systems there is an adder between the virtual address produced by the processor's program counter and the memory. An offset is added to the virtual address to create the physical memory address.

Back then, things like universal computer ownership and the Internet could not have been foreseen, nor was the emergence of digital dishonesty and disruption anticipated. Unfortunately, the von Neumann architecture is not only highly flexible, but also prone to hacking, because once in the basket, all the eggs are available. As we will see in due course, one way of making a hack-proof computer is to abandon or modify the von Neumann architecture in favour of something different.

Early computers were extremely expensive, and it was a purely practical matter that a number of different users would get access to the same machine using a process called timesharing. In a simple system, each user was allocated a time slot during which their code ran. The timesharing system succeeded for three reasons. The first was that a way was found of switching between programs almost instantly, so no time was wasted. The second was that the computer appeared exactly the same to each user. The third was that ways were found to prevent errors in one user program from messing things up for everybody else.

From the standpoint of the users, there is no real difference between a computer that crashes due to an honest mistake and one that crashes due to malicious intent. It follows that a computer that can survive programming errors made by users is likely to resist deliberate acts.

All of these systems worked on what is called virtual memory, which requires a device called a memory management unit between the CPU and the memory.

All users can write code beginning at address zero, and their code appears to run that way, but it is only the program counter in the CPU that begins at zero. Fig.1 shows that the memory management unit adds a constant to the program counter output, which is treated as a virtual address, so that the instructions are actually fetched from a different area of memory at the resulting physical address.
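A minimal sketch of the idea in Fig.1, in C, with an illustrative offset value: the program counter produces virtual addresses starting at zero, and the memory management unit simply adds an offset to form the physical address.

```c
#include <stdint.h>
#include <stdio.h>

/* A minimal sketch of Fig.1: the program counter produces virtual
   addresses starting at zero; the MMU adds an offset to form the
   physical address. The offset value is purely illustrative. */
static uint32_t mmu_offset = 0x4000;         /* loaded by the operating system */

static uint32_t translate(uint32_t virtual_addr)
{
    return virtual_addr + mmu_offset;        /* the adder in Fig.1 */
}

int main(void)
{
    for (uint32_t pc = 0; pc < 4; pc++)      /* user code "begins at address zero" */
        printf("virtual %04X -> physical %04X\n",
               (unsigned)pc, (unsigned)translate(pc));
    return 0;
}
```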

Something similar happens in television digital video effects units (DVEs). In order to shift the picture across the screen, the picture is stored in a memory, and when it comes to be read out, the memory addresses are manipulated by the equivalent of a memory management unit so the picture comes out in a different place.

DVEs were an electronic implementation of the techniques used in map making. Making a flat map of a spherical Earth required projection and ray tracing techniques to be applied, manually at first. When address manipulation began to be used in this way, the technique was called memory mapping.

A further use of memory management is that a processor having a certain word length could now work with a greater quantity of memory than it could address directly. The memory management unit could map the range of the processor into different parts of the memory. In digital audio we would call it gain ranging.

Fig.2a) shows a simple case in which two users are time-sharing a processor. If it is supposed that the time slot of one user is to end and that of another user is to begin, both programs can be at different places in the memory, and the changeover is achieved by loading a different offset into the memory management unit.
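The following sketch, again with illustrative addresses, shows why the changeover is so cheap: both user programs believe they start at virtual address zero, and switching between them amounts to nothing more than loading a different offset into the memory management unit.

```c
#include <stdint.h>
#include <stdio.h>

/* A sketch of Fig.2: two user programs both believe they start at virtual
   address zero, but live at different physical addresses. Switching time
   slots is just a matter of loading a different offset into the MMU.
   The offsets are illustrative, not taken from any real machine. */
static uint32_t mmu_offset;

static uint32_t translate(uint32_t v) { return v + mmu_offset; }

static void run_slice(const char *name, uint32_t base)
{
    mmu_offset = base;                       /* the whole "context switch"      */
    printf("%s: virtual 0000 -> physical %04X\n", name, (unsigned)translate(0));
}

int main(void)
{
    run_slice("user A", 0x4000);
    run_slice("user B", 0x8000);
    run_slice("user A", 0x4000);             /* A resumes exactly where it was  */
    return 0;
}
```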

Clearly the control of the memory management unit must be at some level above that of the user programs, and such a machine requires an operating system to coordinate everything. It would be possible to imagine a Catch-22 situation where the operating system needs to load offsets into the memory management unit in order to access its own instructions, but it can't do that without the memory management unit.

The solution is in suitable hardware design. Fig.2 also shows that the operating system instructions are stored in the first memory page, in which the virtual and physical addresses are identical. If the start-up mode of the memory management unit is to apply zero address offset, the operating system can access its instructions.

Fig.2 - A simple memory management system allows the memory to be bigger than the address range of the processor. Two different user programs can reside in the memory and the processor accesses them via different address offsets. The operating system is not mapped.

In a simple computer, some of the address range has to be reserved so that the processor can address peripheral devices, reducing the amount of memory that can be used. With memory management, that restriction would only apply to the operating system. A user program could use the whole of its virtual address space, mapped to physical memory.

Typically the memory management unit would be hard-wired to map the top of the virtual address range of the processor to the top of the physical address range where the peripheral device addresses are located. The memory management registers are in that space too.

The final ingredient is also implemented in hardware. The processor can operate in more than one mode. One of these modes, the kernel mode, is reserved for the operating system. Only in kernel mode will the memory management unit map addresses to the peripheral page. Only in kernel mode can memory offsets be changed. User programs cannot run in kernel mode, but must run in user mode, which has limited access.
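The sketch below captures that rule under illustrative assumptions (the peripheral page base and the register layout are invented for the example): the hardware check refuses any attempt by user-mode code to reach the peripheral page or to rewrite the memory management offset.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A sketch of the hardware rule described above: only kernel mode may
   reach the peripheral page or rewrite the MMU offset. The constants
   and structure are illustrative assumptions, not a real design. */
enum mode { USER_MODE, KERNEL_MODE };

#define PERIPHERAL_PAGE_BASE 0xF000u   /* top of the physical address range */

static uint32_t mmu_offset = 0x4000;

static bool access_allowed(enum mode m, uint32_t physical_addr)
{
    if (physical_addr >= PERIPHERAL_PAGE_BASE && m != KERNEL_MODE)
        return false;                  /* user code may not touch peripherals */
    return true;
}

static bool set_mmu_offset(enum mode m, uint32_t new_offset)
{
    if (m != KERNEL_MODE)
        return false;                  /* user code may not remap memory */
    mmu_offset = new_offset;
    return true;
}

int main(void)
{
    printf("user touches peripheral page: %s\n",
           access_allowed(USER_MODE, 0xF004) ? "allowed" : "denied");
    printf("user rewrites MMU offset:     %s\n",
           set_mmu_offset(USER_MODE, 0) ? "allowed" : "denied");
    printf("kernel rewrites MMU offset:   %s\n",
           set_mmu_offset(KERNEL_MODE, 0x8000) ? "allowed" : "denied");
    return 0;
}
```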

In user mode, the user program has complete access to a dedicated area of memory, but no access to anything else at all. Users cannot access peripheral devices. Input data can only arrive in that memory area if the user asks for it. The actual data transfer will be performed by the operating system, which alone has access to the peripheral registers and can fetch individual words arriving from a terminal, or initiate direct memory access (DMA) transfers from a disk drive.

Equally, the user cannot output any data at all. Results of a user calculation must be left in a user memory area and the user must request the data to be written. Only the operating system can set up the DMA so the disk drive can transfer user data to a file. The user has no say in the location on the disk where the data are written.
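The sketch below illustrates that request mechanism under stated assumptions (the function names and the simulated disk are invented for the example): the user leaves its results in its own memory area and asks the operating system to perform the transfer; only the operating system routine touches the device, and the user has no say in where the data end up.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A sketch of the request mechanism described above: the user leaves its
   results in its own memory area and asks the operating system to write
   them out. Only the OS routine touches the (here simulated) device; the
   user never sees the device registers or chooses where the data land.
   All names are illustrative. */
static char simulated_disk[256];            /* stands in for a DMA target   */
static size_t os_next_free = 0;             /* the OS decides the location  */

/* The "system call": performed by the operating system on behalf of the user. */
static void os_write_request(const char *user_buffer, size_t length)
{
    memcpy(&simulated_disk[os_next_free], user_buffer, length); /* the transfer */
    os_next_free += length;
}

int main(void)
{
    char user_area[32] = "result=42";       /* user leaves data in its own area */
    os_write_request(user_area, strlen(user_area) + 1);
    printf("disk now holds: %s\n", simulated_disk);
    return 0;
}
```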

Properly implemented, a system like this is resistant to programming errors, because whatever a user program does is contained within the physical memory available to the user. The user has no access to peripheral devices without the involvement of the operating system, and so no honest programming error can affect the operating system or other users.

Further safeguards can be built into the hardware. For example, the memory management unit may have access to the processor mode. Any attempt to write a memory management register, or to create a physical peripheral address, other than in kernel mode is an error condition. The MMU can deal with it by forcing the processor back into kernel mode in a mechanism called a trap, in which the program counter is forced to a particular value where the kernel's error handling routines begin.
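A sketch of the trap mechanism, with invented addresses: when the check fails, the hardware forces the processor into kernel mode and loads the program counter with a fixed value where the error handling routine begins.

```c
#include <stdint.h>
#include <stdio.h>

/* A sketch of the trap mechanism described above: when the MMU detects an
   illegal access it forces the processor back into kernel mode and loads
   the program counter with a fixed value where the error handling code
   begins. The addresses and names are illustrative only. */
enum mode { USER_MODE, KERNEL_MODE };

#define TRAP_VECTOR 0x0010u      /* kernel error-handling routine starts here */

struct cpu {
    enum mode mode;
    uint32_t  pc;
};

static void mmu_trap(struct cpu *cpu)
{
    cpu->mode = KERNEL_MODE;     /* hardware forces kernel mode...            */
    cpu->pc   = TRAP_VECTOR;     /* ...and jumps to the error handler         */
}

static void attempt_peripheral_write(struct cpu *cpu, uint32_t physical_addr)
{
    if (physical_addr >= 0xF000u && cpu->mode != KERNEL_MODE) {
        printf("illegal access at PC=%04X: trapping\n", (unsigned)cpu->pc);
        mmu_trap(cpu);
    }
}

int main(void)
{
    struct cpu cpu = { USER_MODE, 0x2040 };  /* a user program misbehaves */
    attempt_peripheral_write(&cpu, 0xF008);
    printf("after trap: mode=%s PC=%04X\n",
           cpu.mode == KERNEL_MODE ? "kernel" : "user", (unsigned)cpu.pc);
    return 0;
}
```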

These security measures depend on the deterministic nature of hardware and underline that computer security based on software alone cannot work, because software on its own is not deterministic. Unless the computer has an appropriate hardware environment in which its operating system is protected, it cannot be secure.
