Notes for ACCU talk

Abstract:
There is much talk these days, even among computer professionals, that computer security is hopeless. I will argue that while it might be a slog, there are simple paths towards reliability and security. The insight is that there may be no complex paths. Our current trajectory is away from reliability and security. Function on today’s platforms is vulnerable to all the complexity running there. The capability paradigm allows function within a complex system that is vulnerable to very little.
My goal this evening is to convince you that simple software platforms are possible: platforms that host all of today's complex applications, eliminate most of the deleterious interactions, and blunt the ramifications of hostile signals from the outside. In short, to keep the peace. Furthermore, they can host enough Linux emulation to accommodate legacy software that evolved in the Linux world.

A little history

The foundations of modern computer systems are too complex. We do need the new complex applications. They do useful and fun things. But when you do your banking, your password should not be at risk from some game in the background written by someone in a country that you cannot locate on a map.

Some suppose that the best foundational security is to be sure that you are enforcing the Linux or Unix access rules. But those rules allow a Solitaire game to delete all your files.

Both Apple and Google have heard this complaint long enough that they added features to iOS and Android to block that particular crime.

There are many slightly more subtle holes in the standard rules.

Another problem is that the software is so complex that the rules are not consistently enforced: kernel bugs lead to zero-day exploits.

The OCaml and gcc distributions for the Mac want me to give their installers the authority to remove all of my software, along with the kernel, when I install them. That is more authority than the compilers themselves need. I hope you see a problem. There are better ways, even in Linux.

Privacy (for code)

In the 60’s, system designers began to plan for multiprogramming, where several programs could get sporadic access to one CPU. It was clearly understood that these competing programs would each need their own private part of memory, with actual protection from erroneous stores by the other resident programs. Such protection was provided, and multiprogramming became common. Little or no thought was given to protecting the files that kept data from day to day. Actually, most of that data was kept on magnetic tape, and computer operators were trusted with the physical security of the tape reels. As disks entered the scene, any enforced theory of access was haphazard. It was not even clear which software would be responsible for such security.

Timesharing

The success of multiprogramming led some to envision programmers and other computer users sitting at terminals, interacting with the machine. Core (RAM) was expensive, and programs would be brought into memory from disk only when it was time for them to run.

The debug cycle was dramatically shortened. So were many other computer activities, especially as data moved from magnetic tapes to big disks.

The security scheme was to identify individual people and assign disk files to them. Each user logged in with a password and thus gained access to the files he had left the previous day. Various special rules were established to share files such as compilers. These rules varied between early timesharing systems. Unix provided its own set; Unix was more portable (thanks to C), and its access rules have not much changed since then.

I am talking about an old idea: “Capabilities”. There may be disciplines other than capabilities that can achieve this, but I don’t know of any. I know about capabilities, and that is what I will talk about tonight.

Already, in 1965, Dennis and Van Horn built a fine capability-based multi-access computer system for the PDP-1 (Programming Semantics for Multiprogrammed Computations). It was not much noticed.

There were a variety of other capability thrusts in the backwaters of computer science, many of them involving novel hardware. About 1970, Plessey built the Plessey 250, a simple capability hardware system for the control of early telephone switching systems, which have stringent reliability requirements. I think that system is still running in some military communications applications. (Rumors of other military applications in 2017.) IBM’s System/38 involved capability hardware, but IBM reserved its function for internal system matters. Hank Levy wrote a book on the history of capability hardware. You can get the benefits of capabilities without capability hardware, but, like floating point, it is faster with hardware help.

When routine X with an array “elevations” of floats needs the expertise of another routine Y on such arrays, X calls Y with a call site such as “Y(elevations);”. Note that as the program runs, the ASCII string “elevations” plays no role; the string is probably not even in the computer. Furthermore, there is no code checking file permissions as control passes to routine Y. X has ‘native’ access to elevations and efficiently sends that access to Y as an argument, which is very much like a capability. If the call site in X had been “Y(tilt);” then Y would have had access to tilt instead. For this to be secure we need assurance that X and Y were compiled in a memory-safe language and loaded by a type-savvy loader. This sort of security is seldom (never?) relied upon between adversaries today. Capability discipline allows such patterns to be secure even among adversaries. Hardware capability systems may accomplish this without changing address spaces. Without direct hardware support, Keykos does this by switching address spaces.
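
A minimal C sketch of that pattern (the function bodies and data are invented for illustration; in C the discipline is only a convention, which is why the memory-safe language matters):

#include <stddef.h>

/* Y receives access to exactly the array it is handed, nothing more.
   No permission check runs as control passes to Y; the argument itself
   is the grant of access, much as a capability would be. */
float Y(const float *elevations, size_t n) {
    float max = elevations[0];
    for (size_t i = 1; i < n; i++)
        if (elevations[i] > max)
            max = elevations[i];
    return max;
}

/* X holds native access to both arrays and chooses which one to share. */
float X(void) {
    float elevations[] = {12.5f, 3.0f, 88.1f, 40.2f};
    float tilt[]       = {0.1f, 0.2f, 0.3f};
    Y(tilt, 3);                /* had the call site been Y(tilt)... */
    return Y(elevations, 4);   /* the call site from the text      */
}

int main(void) { return X() > 0.0f ? 0 : 1; }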

Kernel Issue

(abrupt veer) Some capability enthusiasts want to move capability theory into the realm of crypto. Then each player, in a world of mutual suspicion, would arrange their own physical protection and cryptography. Of course they could find a vendor to do this difficult work, and trust the vendor. Then there is physical location. Sometimes there is economy of scale; witness the recent cloud rush. If you are trusting the vendor anyway, it may sometimes be wise to do the computing where the vendor is responsible for physical security. It is a matter of scale and trust. National Security Letters are an issue for some. If some of those with whom you need to interact choose the same vendor, there may be many opportunities to send messages unmediated by crypto or by the overhead and latency of the Internet. There may be a few orders of magnitude of throughput to gain here, not to mention latency. This matters sometimes. It might become a reason to choose a certain vendor. There is a 50-year-old technology that provides capability security in which a message can be sent across a protection boundary in a few hundred instructions: it is called the (capability) kernel.

That is one reason for a kernel. An even bigger reason is that modern computers have accreted a vast quantity of mostly useful software, written by programmers each with a view of only the small part of the system to which their programs contribute. This collective code needs to interact. The sorts of interaction permitted by the Linux kernel, or by the other commercial offerings, are small extensions to Unix and too coarse grained; they usually grant too much or too little. In a capability system there is a place for an application to keep its data from one day to the next that is inaccessible to almost every other program that runs on the machine.

Capability (Saltzer & Schroeder)
In a computer system, an unforgeable ticket, which when presented can be taken as incontestable proof that the presenter is authorized to have access to the object named in the ticket.

There is no crisp definition agreed upon even by capability aficionados, but they know one when they see one. Capability discipline is enforced by some combination of hardware and software, just as timesharing systems kept the actions of one user’s programs from harming, or even reading data from, the programs of other users. Unix developed the sorts of rules that dominate today’s kernels; Linux changed little. Today few programmers are aware of access rules other than those enforced by Linux, perhaps slightly modified by iOS or Android. Capability discipline is a different sort of rule set.

A capability is something that a program holds and uses to get information or act.

A capability system, or platform, is one where programs acquire information or act only via capabilities.

In Linux, when you open a file with the open system call you get back a file descriptor, fd. The fd is an index into a table of capabilities, but Linux is not a capability system. The command “man 2 open” says that open("xyz", 2) returns an int. The kernel keeps an array of file descriptors for each process, and the returned int is an index into this array. It is your array, but you can’t see the bits there. “man 2 read” takes this integer and reads data out of the file.

#include <fcntl.h>     /* open */
#include <unistd.h>    /* read, write */

char buf[1024];
… 
  int cap = open("a.c", 2);               /* 2 == O_RDWR */
  ssize_t x = read(cap, buf, sizeof buf);
  if (x > 0)
      write(1, buf, x);                   /* 1: the teletype capability */
reads a buffer into memory using a capability, cap. 1 is the index into the array for the capability to write on the teletype. Linux is not a capability system, both because of how the program acquired the capability and because there are ways to act that do not require file descriptors at all.
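
For contrast, a minimal sketch of acting with no file descriptor at all (the file name is invented):

#include <stdio.h>    /* remove */

int main(void) {
    /* No descriptor is presented and no capability is consulted:
       a program running as you may delete a file merely by naming it. */
    remove("some-precious-file");
    return 0;
}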

Rules

Specific Capability Platforms

Keykos and several other systems provide these functions in a small package. The Keykos kernel is about 250KB of machine code. That includes virtual memory, RAM as cache of disk, and orthogonal persistence. Most applications find they need about another 250KB of function provided by other code that runs confined by capability discipline (runs in user mode).

Coyotos is a more modern system that takes many ideas from Keykos.

Capsicum and CHERI are two platforms that can share the same hardware. Capsicum attempts to directly preserve Linux applications; CHERI does not try to do that, at least not at a fundamental level.

seL4 is a very interesting system in development.

Human Interface, a Challenge

The capability systems that I have seen had a teletype interface, or its logical equivalent. This was simple, indeed too simple for some of the useful things we want to do with computers. There are many things inside a personal computer, and the user must be able to distinguish between them, even when some object may try to impersonate something it is not. How many times have you been asked to type in your password and wondered what program was actually going to receive it?

The modern screen is a window onto that world, but today such screens are controlled by software that is very complex. A challenge is to design a graphic user interface so that simple critical tasks need not rely on a monolithic graphics package. Furthermore, the user must be aware of which screen parts are controlled by which computer entities. He must also know where his keystrokes are going and who can see his cursor acts. Programs should not be able to read the clipboard content at arbitrary times.

The first priority, however, is to convince the user that there are objects that you may want on your screen but that you do not want to trust with your bank routing numbers. This is not an alien concept. People have understood for 10,000 years that there are parts of a city where you do some things and not others, and there are parts where you do not go.

Another challenge is to teach them that some spots on the screen are capabilities, in the dictionary sense. A spot on a page from your bank is the capability to query your account balance; dragging it to a window where you are filling out a loan application will give that other institution the ability to query your account balance. Dragging a file icon into an edit app is already familiar, with nearly the right implications. Dropping the icon onto an attenuator that makes a read-only capability is another thing to study. Or perhaps you type an r while dragging the icon and it changes color. Some notes, where “key” means capability.

Grist

Granularity:
Even the right to read one bit in some file can be a capability: just create an object that holds a capability to read that file and knows the address of the bit (a sketch follows these notes).
Anonymous functions
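
A sketch of such a one-bit attenuator in plain C. All names here are invented, and in a real capability system the object would sit behind a kernel-enforced protection boundary rather than a C convention:

#include <stdbool.h>
#include <stdio.h>

/* Holds a read capability to a file (a FILE* stands in for it here)
   plus the address of one bit. A holder who is given only read_bit
   can learn that bit and nothing else about the file. */
struct bit_reader {
    FILE *file;         /* stands in for a read capability to the file */
    long  byte_offset;  /* which byte holds the bit                    */
    int   bit_index;    /* which bit of that byte, 0 through 7         */
};

/* The sole operation the object exposes. */
bool read_bit(const struct bit_reader *br) {
    fseek(br->file, br->byte_offset, SEEK_SET);
    return (fgetc(br->file) >> br->bit_index) & 1;
}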

2 C languages

Software Platform:

Today there are cloud computers running software from diverse organizations with diverse goals. The common logic for cloud platforms is that you pay for average use instead of peak use. If cloud computers allowed interactions under the right rules, there would be additional merits. Sometimes the programs of these organizations need to interoperate with each other. Sending encrypted packets through the Internet is slow; a quick pass through a capability kernel takes about 200 instructions. Guess which is faster.
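
An interface sketch of such a pass, with every name below invented for illustration (Keykos and seL4 each define their own real IPC primitives):

typedef unsigned cap_t;   /* held by index, much like a file descriptor */

/* Imagined kernel entry point: invoke the object behind `target`,
   handing it a message and waiting for the reply. One pass through
   the kernel, on the order of a couple hundred instructions: no
   network stack, no encryption, no Internet round trip. */
long cap_call(cap_t target, const void *msg, unsigned msg_len,
              void *reply, unsigned reply_max);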

Today browsers aspire to provide a platform. The bad news is that JavaScript is not always appropriate. The good news is that the browser, with its apps, is portable across ISAs.

The security and stability of the browser are, however, no better than those of the kernel below.

The default kernel, these days, is Linux. But in Linux I can’t play solitaire without granting the solitaire code the authority to erase all my files. The Linux security model has not evolved since Unix was designed for a building full of programmers working together in 1970.

iOS and Android have added stuff (entitlements) to the Unix model to solve a few of these problems. Adding stuff generally causes more problems than it fixes.

OK, I will say it: “Capabilities are the answer.”

With capabilities you arrive at a really small kernel that is easy to understand. There are simple patterns, built upon capabilities, that allow critical functions to run reliably and securely on the platform despite bugs and viruses inhabiting the same platform. You need not banish all that code you have not read; you need merely run critical code in a safe and reliable compartment. There are degrees of criticality and safety.

Contrast

cp x y
cat < x > y
With “cp x y” the cp program itself must wield the authority to open any file you can name; with “cat < x > y” the shell opens the files and hands cat the already-open descriptors, so cat needs no file authority of its own. The second is the capability pattern.
call; Bleeding Bugs, why platform, Virtualizing other platforms

This article says:

In Keykos you resume after the instruction at which you were interrupted.

Today one attraction of Amazon’s cloud computers is free access to massive weather and biological data. The data is free, but it is often cheaper to send your analysis program to Amazon’s computer than to fetch the data over the Internet and store it. I wonder how many business plans are thwarted by the inability to meter access to highly valuable data.

Flame


Bibliography: J. B. Dennis and E. C. Van Horn. Programming Semantics for Multiprogrammed Computations. Technical Report MIT/LCS/TR-23, M.I.T. Laboratory for Computer Science, 1965.