
Chapter 4. Security in KeyKOS

KeyKOS uses keys as its entire access mechanism. There is no need for file passwords, access lists, or other easily compromised technologies. Further, KeyKOS compartmentalizes applications into distinct units with impenetrable shields, thus eliminating many covert channels for information flow. The following examples illustrate how this architecture solves many common security problems.

The Trojan Horse Problem

Probably the best known security exposure in existing systems is the Trojan Horse problem. A user runs a program received from some other person. When the program runs, it has access to many files nominally owned by the person running it. The program may copy data to a place known to the program's author or, perhaps more maliciously, alter or erase files belonging to the person who ran the program.

To understand how KeyKOS solves the Trojan Horse problem, it is useful to consider how a program runs in a conventional system and contrast it to KeyKOS.

To compile a program in a conventional system (compilers are generally considered benign programs, but the pattern is the same for any program), a user typically says something like "COMPILE program" to the command language interpreter. The command language interpreter starts up the compiler, which then asks the system for access to files named "program.SOURCE", "program.LISTING", and "program.EXECUTE". During its execution the compiler reads and writes these files as appropriate.

There is no mechanism to keep the compiler from also reading the file "SUPER.SECRET.DATA" and copying part of it to another file, "SECRET.PLACE", owned by the author of the compiler.

To compile the same program under KeyKOS a user would issue the same kind of command, "COMPILE program". However, before starting up the compiler (which runs in a separate domain), the command language interpreter would get keys to the three files, "program.SOURCE", "program.LISTING", and "program.EXECUTE". The command language interpreter would call the compiler, passing these three keys as parameters. (The key to "program.SOURCE" might be a read-only key.) When the compiler runs, it can read and write the three files, but no others. Thus all other data owned by the person who ran the program is protected. When the compiler terminates, it is possible to destroy the particular domain that contained the instance of the compiler which compiled the program. Destroying the domain ensures that the compiler does not remember anything about the program it compiled, and thus cannot pass that information along later to another caller.
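The pattern can be modeled in a few lines of code. The following Python sketch is illustrative only: the names File, FileKey, and run_compiler are invented for the example, and in KeyKOS keys are kernel-protected objects rather than language-level values, so the boundary is enforced by the kernel, not by programmer discipline.

    # Illustrative model of capability passing; not the KeyKOS interface.

    class File:
        def __init__(self, contents=""):
            self.contents = contents

    class FileKey:
        """An unforgeable reference to one file, optionally weakened to read-only."""
        def __init__(self, file, read_only=False):
            self._file = file
            self._read_only = read_only

        def read(self):
            return self._file.contents

        def write(self, data):
            if self._read_only:
                raise PermissionError("read-only key")
            self._file.contents = data

    def run_compiler(source_key, listing_key, execute_key):
        # The compiler can use only the keys it was handed.  It has no way to
        # name "SUPER.SECRET.DATA", because it holds no key to that file.
        text = source_key.read()
        listing_key.write("listing for: " + text)
        execute_key.write("object code for: " + text)

    # The command language interpreter holds the user's keys and passes exactly three.
    source = File("source text of program")
    listing, executable = File(), File()
    run_compiler(FileKey(source, read_only=True), FileKey(listing), FileKey(executable))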

One other property of KeyKOS is important to this discussion. A program running in a domain cannot tell much about the environment in which it runs. It cannot tell which user is calling it or recognize when it is supposed to collect secrets.

Protecting Proprietary Programs

The current technology for protecting proprietary programs and data bases requires the goodwill of purchasers. There is no protection available in IBM operating systems to help enforce such contracts. In these systems, users of proprietary programs can make copies and carry them away. Even in the most secure systems it is possible for a user at a terminal to display the instructions in a program and copy them by hand. In KeyKOS, it is a simple matter to put a proprietary program or data base into a domain and to give keys to all prospective users. Users may invoke their keys to have the domain perform its service. However, there is no way for them to copy the domain to another place, or to access its contents in any unauthorized way. Thus, the program or data base is completely protected from theft.
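A minimal sketch of this "service behind a key" idea follows. Kernel enforcement has no direct analogue in a programming language; here a Python closure merely illustrates the shape, and the names make_proprietary_service and service_key are invented for the example.

    def make_proprietary_service(secret_algorithm):
        """Return a key (modeled as a function) that lets callers use the
        service without being able to read or copy its implementation."""
        def service_key(request):
            return secret_algorithm(request)
        return service_key

    # The vendor keeps secret_algorithm; customers receive only the service key.
    reduce_data = make_proprietary_service(lambda samples: sum(samples) / len(samples))
    print(reduce_data([3.2, 4.1, 5.0]))   # permitted: invoke the key
    # No operation on reduce_data yields the algorithm itself or a copy of it.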

Protected Entry Points

Many of the protection schemes in conventional operating systems rely on voluntary user cooperation. For example, many data base management systems are built upon standard system access methods. The data base manager enforces restrictions upon what information a particular user may see or change. In many cases, however, the user may access the data base files directly, bypassing all the security features of the data base manager. Similar exposures exist in many subsystems that perform error checking at entry. An enterprising programmer can initialize all working registers and enter the subsystem at some location other than the intended entry point, thus bypassing all validation tests.

These exposures do not exist for programs operating in KeyKOS. While it is still possible to build a data base manager which uses a file system to store its data, the keys to access the files can be closely held by the data base manager, completely protected from other domains. Similarly, if a subsystem is placed in a separate domain from its callers, it is also protected. Since the only thing a domain can do with a key is to call it, the caller has no way of influencing where the called domain will start. KeyKOS has no need for all of the complex validation often found in today's subsystems, since the fact that the calling program was able to call the subsystem at all implies that it had the necessary authority.
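The single-entry-point property can be sketched as follows. The class and method names are hypothetical; in KeyKOS it is the kernel, not language visibility rules, that guarantees an invocation starts the called domain only at the point the domain itself chose.

    class FileStoreKey:
        """Stand-in for a key to the manager's private file store."""
        def __init__(self, records):
            self._records = records
        def read_record(self, record_id):
            return self._records[record_id]

    class DatabaseManager:
        PERMITTED = {"alice": {1, 2}, "bob": {2}}

        def __init__(self, file_key):
            self._file_key = file_key      # closely held; never handed to callers

        def query(self, requester, record_id):
            # The only entry point a caller reaches by invoking the manager's key;
            # the validation below cannot be jumped over or entered part-way.
            if record_id not in self.PERMITTED.get(requester, set()):
                raise PermissionError("not authorized for this record")
            return self._file_key.read_record(record_id)

    manager = DatabaseManager(FileStoreKey({1: "record one", 2: "record two"}))
    print(manager.query("bob", 2))         # succeeds; any other path is simply absent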

Mutually Suspicious Users

Often two programs need to interact with each other, yet neither owner wishes to give proprietary information to the other. An example of this situation is a data reduction program proprietary to a software vendor that processes seismic data proprietary to an oil company. If the software vendor does not trust the oil company, and the oil company does not trust the software vendor, there seems to be no way for them to do business. The best that conventional systems can do is protect either the program or the data, not both.

In KeyKOS the program can be placed in a domain. The program owner can create the domain using a KeyKOS facility that allows the caller to determine whether the domain has any unacceptable communication channels. The program owner is protected by the domain. The data owner is protected because the domain cannot communicate the data in unacceptable ways. Both the program and the data are completely protected from unauthorized access.

Figure 4-1
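A simplified sketch of the confinement test implied above: before entrusting data to a freshly created domain, the data owner checks that the domain's keys lead only to approved objects, so no channel exists through which the data could leak. The function name and the key model are invented for the illustration and are not the KeyKOS facility itself.

    def is_confined(domain_keys, approved_keys):
        """True if every key the new domain holds is on the approved, hole-free list."""
        return all(key in approved_keys for key in domain_keys)

    approved = {"private memory segment", "scratch space", "reply key to data owner"}
    vendor_domain_keys = {"private memory segment", "scratch space", "reply key to data owner"}

    if is_confined(vendor_domain_keys, approved):
        # Safe to pass the seismic data key: the vendor's domain holds no key
        # that could carry the data to the vendor or to anyone else.
        pass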

Trusting Programs and Operators

One of the security exposures in traditional computer systems is that both systems programmers and operators can access all of the information contained in the system. In KeyKOS, only people who hold a key to a domain can call it, and keys are not counterfeitable. This fact has some interesting implications. For example, on most systems periodic backups are made of files as protection against inadvertent erasure or hardware errors. Operators and systems programmers have the ability to restore or repair files that have become unusable. In a KeyKOS system this will only be possible for files registered with a system agency responsible for backup. Backup functions can be provided under KeyKOS, but the option of greater security is also available.

Auditors

Auditing in KeyKOS is quite different from auditing in conventional operating systems. Typically, a security audit inspects the keys that domains hold and the resulting possible flows of information. An audit of a domain's keys reveals exactly what other objects it can invoke, and how it can invoke them (e.g., read-only). When it is necessary to know more detail about the interactions between two objects, it is a simple matter to insert an auditing domain between them. The auditing domain may inspect each transaction and log those of interest to the auditor. Neither of the original objects can detect that transactions are being audited. Thus, they are not likely to change their behavior because of the audit.
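The interposition pattern might look like the sketch below. A wrapper object stands in for the auditing domain, and the call method models key invocation; neither name is part of KeyKOS, and in the real system the substitution is made by rebinding the key the caller holds.

    class EchoService:
        """Stand-in for the audited object."""
        def call(self, *args):
            return ("echo",) + args

    class Auditor:
        """Sits between a caller and the real object, logging each transaction."""
        def __init__(self, real_key, log):
            self._real_key = real_key
            self._log = log

        def call(self, *args):
            self._log.append(("request", args))
            reply = self._real_key.call(*args)
            self._log.append(("reply", reply))
            return reply

    # The caller is handed an Auditor in place of the original key.  Because both
    # respond to call() identically, neither party can tell the audit is happening.
    log = []
    audited = Auditor(EchoService(), log)
    audited.call("transfer", 100)
    print(log)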

Note that it is not necessary to inspect the code of all domains to ensure security. A domain cannot interact with any outside entity except through its keys. The security auditor need not inspect code in the domains which do not have a security function, since the only thing they can do is compromise their own integrity. (If it is not possible to test security domains adequately by examining their inputs and outputs, it may be necessary to inspect their code to ensure they perform their specified functions correctly.) The auditor clearly has to have a great deal of authority to be able to splice into arbitrary connections between objects.

This authority must be explicitly established at the time the application is created. Thus, the auditor's authority is controlled by the same security mechanisms that control all authority in KeyKOS.

Military Multi-level Security Problem (1)

In a simplification of the military multi-level security environment, there are four classes of data, known as Top Secret, Secret, Confidential, and Unclassified. These four classes must be completely isolated from each other.

In KeyKOS, this may be accomplished by creating, at system initialization time, four distinct sets of domains. Each set is completely self-contained; that is, its domains hold keys only to other domains in the same set. The architecture of the KeyKOS kernel ensures that no domain can ever interact with any domain outside of its set. Within a set, the domains can interact in whatever way is appropriate, with no possibility of compromising the security of the system. In effect, each set of domains runs in a self-contained environment, exactly as if it were on a separate machine. No communication is possible between the sets. The only part of the system which must be trusted is the kernel, since it is the only part which interfaces with more than one security classification.
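The isolation invariant can be stated as a small check over the key graph. The model below is illustrative: domains are names, and a mapping records which domains each domain's keys designate. In KeyKOS the kernel maintains this property by construction, since keys can only be copied from existing keys, never manufactured.

    def set_is_closed(domains, key_targets):
        """True if every key held by a domain in the set designates a domain in the set."""
        return all(key_targets[d] <= domains for d in domains)

    top_secret = {"ts_editor", "ts_files", "ts_mailer"}
    key_targets = {
        "ts_editor": {"ts_files"},
        "ts_mailer": {"ts_files"},
        "ts_files": set(),
    }
    assert set_is_closed(top_secret, key_targets)   # no key leads out of the set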

Military Multi-level Security Problem (2)

The previous model does not adequately describe the real world, because the various classes of information must interact according to carefully prescribed rules.

To control the flow of information between different classification levels it is necessary to install filters. These filters can be implemented by domains, perhaps supplied by one or more government agencies, that are trusted by those responsible for security. In general, it will be necessary to trust any domain that holds keys to more than one classification. The analogy with physically separate systems holds here: the agency must trust the individual who carries a tape from one system to another system of a different classification.
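One plausible shape for such a filter is sketched below. The document leaves the actual rules to the trusted agency supplying the filter; the policy shown (information may flow upward in classification, never downward) and all of the names are assumptions made for the illustration.

    LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

    class Filter:
        """A trusted domain holding keys into two classification levels."""
        def __init__(self, source_level, destination_level, destination_key):
            self._src = LEVELS.index(source_level)
            self._dst = LEVELS.index(destination_level)
            self._destination_key = destination_key

        def forward(self, message):
            # Refuse any transfer that would move data to a lower classification.
            if self._dst < self._src:
                raise PermissionError("downward flow refused")
            self._destination_key.write(message)

    class Mailbox:
        """Stand-in for a key to a file at the destination level."""
        def __init__(self):
            self.messages = []
        def write(self, message):
            self.messages.append(message)

    secret_inbox = Mailbox()
    up_filter = Filter("Confidential", "Secret", secret_inbox)
    up_filter.forward("field report 17")        # permitted: upward flow only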

Denial of Resources

Unfortunately, all conventional systems are vulnerable to a class of problems known as denial of resources. Denial of resource problems do not cause programs to perform incorrectly or compromise the integrity of data; instead, they make the entire system or some critical resource unavailable. Examples include an endless channel program that monopolizes a channel, and a program that writes thousands of messages to the operator, thereby crashing the system by using all of its main memory. In either case, part or all of the system is unavailable to users. This situation could have serious consequences if a business depended on the continuous availability of the system.

The design of KeyKOS applications limits denial of resource problems. The vast majority of an application is implemented in domains that are incapable of monopolizing any system resource. The kernel itself has extensive defenses to protect itself from denial of resource penetrations.
