Using capabilities to solve security problems is unorthodox. There is a common unarticulated idea that there is just one security problem and a computing system is either secure or not. To some, computer security consists of keeping bad people away from the computer. The next stage is being sure that computers can distinguish the good people at terminals from the bad people. The next stage has only recently gotten substantial press: being sure that the computer can distinguish good programs from bad programs. Yet another stage is now just gaining broad recognition: letting the bad programs do things for you while not letting them do bad things to you. There is no fundamental reason why a program that you write for me should be able to delete my bank transactions file just because I can. Yet those are the permission rules imposed by the current commercial OSes for personal systems.
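The distinction in that last stage can be sketched concretely. Here is a minimal Python sketch, with hypothetical file names, contrasting the two regimes: under ambient authority a program I run inherits all of my permissions, while under capability discipline it receives exactly the objects it needs and nothing more.

```python
# A minimal, hypothetical sketch; the file names are illustrative.

# Ambient authority: the routine runs with all of my permissions,
# so nothing but good manners keeps it away from unrelated files.
def summarize_ambient(report_path: str) -> str:
    # It could just as legally do: open("bank-transactions", "w")
    return open(report_path).read().upper()

# Capability discipline: I open the one file myself and pass the
# open handle. Receiving the handle both designates the file and
# grants the authority; the routine is handed nothing else.
def summarize_with_capability(report_file) -> str:
    return report_file.read().upper()

open("report.txt", "w").write("quarterly figures\n")
with open("report.txt") as f:
    print(summarize_with_capability(f))

# Plain Python cannot enforce this discipline (the second routine
# could still call open()); a capability OS removes that ambient
# ability, so holding the handle is the only way to reach the file.
```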
The ideas presented here go well beyond those problems. Some say that you should not worry about problems until they become real and that market forces will elicit solutions soon enough. Such just-in-time solutions are usually ad hoc and even ineffective. We propose design principles here that go well beyond commonly identified security needs.
We describe a style of platform where programs can take responsibility for portions of cyberspace and have effective and convenient means to achieve those ends. Such ends include guarding their long-term integrity, keeping their secrets, and remaining available. These are much the same ends that we expect of a good digital hardware design.
Butler Lampson named and described the Confinement Problem in [Butler Lampson, “A Note on the Confinement Problem,” Communications of the ACM, vol. 16, no. 10, October 1973]. I am unaware of attempts to solve this problem within orthodox systems.
I described the problem of the Confused Deputy in 1988. That paper argues that the problem is likely to impact many programs designed to wield their necessary authority with discretion. The problem seems endemic in systems, such as Unix, where programs select the files that they will operate on by presenting the file names to the operating system. I have heard no suggested solutions to the general Confused Deputy problem in orthodox systems. I suggest that such security problems cannot be solved by patching the security rules of traditional systems.
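The shape of the problem is easy to sketch. Here is a hypothetical Python rendering, loosely after the compiler example in the 1988 paper: a deputy that accepts names and opens them with its own authority cannot tell which of its two masters a given name came from.

```python
# Hypothetical sketch, loosely after the compiler example in the
# 1988 paper; file names are illustrative. Run in an empty directory.

BILLING_FILE = "bill.log"   # the deputy may write this; its callers may not

def compile_source(source_name: str, output_name: str) -> None:
    """A confused deputy: it accepts names and opens them with its OWN authority."""
    code = open(source_name).read()            # designation supplied by the caller...
    open(output_name, "w").write(code)         # ...exercised with the deputy's rights
    open(BILLING_FILE, "a").write("1 unit\n")  # the deputy's private duty

open(BILLING_FILE, "w").write("previous charges: 7 units\n")
open("prog.src", "w").write("some source text\n")
compile_source("prog.src", BILLING_FILE)  # the caller names the billing file
                                          # as its "output": the record is clobbered

# Capability style: designation and authorization travel together in one
# handle, so the deputy can exercise no right its caller did not hold.
def compile_source_cap(source_file, output_file, billing_file) -> None:
    output_file.write(source_file.read())
    billing_file.write("1 unit\n")
```

In Unix terms the capability version amounts to passing open file descriptors instead of path names; in a capability system that is the only way a program can reach a file at all.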
Consider the situation of firewalls today. When a cracker exploits a property of a firewall to cause damage, the firewall property will be changed. This may well thwart the cracker, but it often breaks legitimate applications as well. These applications were coded according to the security rules that were in effect at the time they were written. It is unclear whether the cracker or the application programmer will find their way around the new firewall rules first. Capability discipline provides principles upon which to design computing systems and applications so that the applications will not fall victim to ad hoc changes in security rules. Without attention to these principles at the foundation of the computing system, these security problems will not be solved. Further, applications built upon systems innocent of these principles will not survive intact when ported to secure foundations.
The prevalent attitude towards these security problems is that when the market demands solutions, orthodox systems will be patched to provide them. I haven’t noticed current orthodox systems being patched to protect against viruses and Trojan horses, however, despite the obvious market demand. (Incidentally, capability systems are naturally immune to viruses and to the most general sorts of Trojan horse.)
When I had seen only one style of computer circuit design, I assumed unconsciously and wrongly that it was the only such design style. Most computer scientists today have seen only one operating system design style. (Mac and Wintel scarcely have operating systems that are relevant here; written in 2000.) Most will assume that Unix has the only style of OS security and that security problems unsolvable in Unix are beyond the state of the art of OS design. They may even think that the problems are not well stated.
This note is not meant to convince you that capability design can solve all of the security problems that plague computer system design. Indeed, this web site with all of its pointers is probably insufficient. Only experience with a complete system built on capability ideas may be able to do that. As I first began to consider capability designs, I lacked confidence that they could provide a complete foundation for computer usage. Only with the completion of a very general system did we gain confidence in the generality of capability solutions. The material here describes that system in considerable detail. This system was built for commercial timesharing, where code by programmers with incompatible loyalties ran in the same machine and needed to cooperate. Our capability design therefore paid more attention to cooperation with protection than most of the capability systems that arose in academia.
Richard Uhtenwoldt, as a newcomer to capabilities, put the claims for the pertinence of capabilities to security more eloquently and briefly than I have been able to.