This attempts a self-contained introduction to enough of the factory ideas to dispel the notion that confinement is impossible.
If you and I run our code on the same computer but you do not trust me, you may use capability discipline to protect yourself against me.
In a few words, your stuff is safe from my code because my code cannot fabricate capabilities to your stuff, even if it knew the correct bits, which it doesn't.
Another necessary observation is that my code won’t receive capabilities to your stuff unless you send them to me.
You will presumably not do this because you wrote (or trust) all of the code that deals with capabilities to your stuff.
Before the factory idea, the usage patterns above roughly summarized the common security ideas for capabilities.
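As a toy illustration of that pre-factory discipline (a hypothetical sketch in Python, not anything from KeyKOS; every name in it is made up), treat an object reference in a memory-safe language as a capability: my code reaches only the objects you explicitly pass it, and knowing the "correct bits" of anything else gives it no handle at all.

    class Secret:
        """Your stuff: reachable only through a reference (capability) to it."""
        def __init__(self, data):
            self._data = data


    def my_untrusted_code(things_you_sent_me):
        """My code: it holds exactly the references you chose to send it."""
        for thing in things_you_sent_me:
            print("I can see:", thing)
        # There is no way here to fabricate a reference to your Secret;
        # guessing its address or name yields no capability in a
        # memory-safe language.


    your_secret = Secret("payroll")           # you hold the only capability
    my_untrusted_code(["a public greeting"])  # you send me nothing of yours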
The factory idea might be described in the same style as above, as follows.
There is a way for you to run my code, which you don’t trust, with your secrets so that you know that my code can’t send your secrets to me.
I must cooperate in this by packaging my code so that you may know your secrets are safe from me.
If I do this it will be clear to you that I have done so.
Further, it does not require you to read my code, something I may well object to.
To cooperate in this manner, we must both trust the factory’s behavior.
- How is it that my code cannot send those secrets to my part of the machine where I can see them?
- It is because my code, while it sees your secrets, holds no capabilities to my part of the machine, nor can it fabricate them, even if it knew the correct bits, which it doesn’t.
- Why can I not send my code such capabilities?
- Because I have no capabilities via which to send such a message.
The factory creates the object you will use.
Had I created the object that works on your secrets, I could have retained a capability to send messages to it.
- What keeps me from adding holes to the factory after my code is working on your secrets?
- The capability you use to create an object to process your secrets is produced by an operation called sealing the factory.
Sealed factories never accept more holes.
- Can I examine a list of the capabilities that your program possesses?
- Well no, that might reveal my secrets, but I hope that you will trust the factory to have done just that.
Such capabilities are called components.
The factory examines each component that my code can fetch as it works on your secrets; it is the factory, not I, that decides which components it will admit.
- Why can my code not pick up a capability to me from the universal public registry?
- Because my confined code has no capability to any such registry.
Even in systems where there is a public directory of mailboxes for each user, the confined code lacks a capability to that directory.
- You sent me a key that you claim is a factory requestor’s key.
How do I know that that key is indeed a factory key and that I should therefore trust its behavior?
- You should send that key to a factory that you got directly when your user account was established.
Your own factory instance will vouch for the one I made.
If you don't trust the person from whom you got your user name and password, then you shouldn't be keeping your secrets on the machine!
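To make the dialogue concrete, here is a small sketch of the factory life cycle (again hypothetical Python of my own devising, not the KeyKOS interface): I build a factory around my program, installing components before sealing; sealing yields the requestor's capability that I hand to you; you check it for holes and then let the factory, not me, create the object that works on your secret.

    class Factory:
        """Toy factory: my program plus the components it will run with."""
        def __init__(self, program):
            self._program = program      # my code, which you never read
            self._components = []        # capabilities my code will hold
            self._sealed = False

        def install(self, component):
            # Before sealing I may add components, each a potential hole.
            if self._sealed:
                raise RuntimeError("sealed factories never accept more holes")
            self._components.append(component)

        def seal(self):
            # Sealing produces the requestor's capability that you will use.
            self._sealed = True
            return FactoryRequestor(self)


    class FactoryRequestor:
        def __init__(self, factory):
            self._factory = factory

        def has_holes(self):
            # The factory, not you, examined each installed component; in
            # this toy every installed component simply counts as a hole.
            return len(self._factory._components) > 0

        def create(self, your_secret):
            # The factory, not I, creates the object that works on your
            # secret, so I retain no capability for messaging it.
            return self._factory._program(your_secret,
                                          list(self._factory._components))


    def my_program(secret, components):
        # My code sees your secret but holds only the admitted components,
        # none of which reach back to my part of the machine.
        return "processed %r with %d components" % (secret, len(components))


    requestor = Factory(my_program).seal()    # I seal it before sending it
    if not requestor.has_holes():             # you verify it is hole-free
        print(requestor.create("your payroll data"))

The sketch omits the vouching step from the last exchange above: in KeyKOS you would hand the alleged requestor's key to a factory you obtained when your account was established and let it vouch for the key's behavior.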
A very legitimate objection at this point is that it is not clear that software can be plugged together according to capability discipline in a way that supports practical application development and deployment.
Here is a brief description of a shell for such a system.
The ideas above suggest a mutually trusted function to take my code and package it with your data so as to allow the computation while protecting our respective interests.
This is indeed the case.
There was already an unstated assumption that we both trusted the hardware and operating system.
The Keykos factory is normal user code that sits on a capability system and, if mutually trusted, performs these services and just a few more.