Prototypical Rights Amplification

I wish to write a program for use by others with whom I share a computer. Yet I want to ensure perfect encapsulation, so as to conceal from others everything about my program and its data except the responses that my program produces. I create an object C that creates objects oi, each of which obeys my program in order to serve its individual client. Some client invokes C and C returns a capability to a new instance oi. The nature of my program’s task is to keep mutable private state for each client. If clients do not trust me to guard their secret states I might use a factory, but that decision does not affect this scenario.

Suppose that some client discovers a bug in my program, or more properly a misbehavior of his instance o51. He can send me the capability that he uses to access o51. That capability does not, however, allow me to examine the broken mechanisms that my code uses to represent his private state, for the abstraction mechanisms that exclude him exclude me as well, even though the code is mine. When C created the client’s instance, it held capabilities to these hidden components, but when C delivered the client’s capability to o51, C discarded those inner capabilities. C is thus in no position to help me, even though it once held the capability that I need.

The capability that C delivers to the client is produced by weakening one of these inner capabilities. C could keep all such inner capabilities for subsequent debugging. In this scheme I could pass to C the client’s capability for the broken object; C could reproduce (reweaken) each of the client capabilities that it had ever handed out and recognize which of them matched the one for the broken object. C could then return the corresponding inner capability to me. (I invoke C via a special capability to C, a facet, which I retained as I created C, so that C can know that it is I who requests the breaking of abstraction.)
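
This bookkeeping can be sketched in Scheme. Everything here is illustrative: make-C, make-instance, weaken, and cap=? are assumed names, and the sketch assumes that weakening is reproducible, so that reweakening an inner capability yields something that cap=? can match against the client’s capability.

    ;; Hypothetical sketch of the retain-and-reweaken scheme.
    ;; make-instance builds an inner (strong) capability; weaken derives
    ;; the client's (weak) capability; cap=? compares weak capabilities.
    (define (make-C make-instance weaken cap=?)
      (let ((inner-caps '()))                  ; every inner capability C made
        (define (new-instance)                 ; what ordinary clients invoke
          (let ((inner (make-instance)))
            (set! inner-caps (cons inner inner-caps))
            (weaken inner)))                   ; the client sees only the weak form
        (define (amplify client-cap)           ; reachable only via my private facet
          (let loop ((caps inner-caps))
            (cond ((null? caps) #f)            ; not a capability C handed out
                  ((cap=? (weaken (car caps)) client-cap)
                   (car caps))                 ; the matching inner capability
                  (else (loop (cdr caps))))))
        (cons new-instance amplify)))

Note the cost that the next paragraph complains of: amplify must retain, and rescan, every inner capability that C has ever created.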

Few, if any, of these retained capabilities would ever be used, and the scheme requires extra code, processing, and storage.

There is a better way. With some form of synergy, C can amplify a client’s capability to recover the stronger capability that was originally weakened to produce the client’s capability. Whatever this form of synergy, it must allow C to reserve to itself the ability to amplify the weakened capabilities that it hands out. In the case of Keykos either:

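Setting the Keykos mechanisms aside, the pattern itself can be sketched in Scheme with a private brand. This is a model of the idea only, not of the Keykos implementation; it assumes that capabilities are message-dispatching procedures, and the names make-amplifiable-weakener, weaken, and amplify are all made up here.

    ;; Hypothetical sketch of synergy via a private brand.  weaken*
    ;; wraps an inner capability so that ordinary invocations see only
    ;; the weakened behavior, while the holder of amplify (C alone,
    ;; since brand never escapes) can recover the inner capability.
    (define (make-amplifiable-weakener weaken)
      (define brand (list 'brand))             ; fresh, unforgeable token
      (define (weaken* inner)
        (define weak (weaken inner))
        (lambda (msg . args)
          (if (eq? msg brand)
              inner                            ; amplification: yield the strong cap
              (apply weak msg args))))         ; otherwise act as the weak cap
      (define (amplify client-cap) (client-cap brand))
      (cons weaken* amplify))

Unlike the retained-capability scheme above, nothing is stored per client; the weakened capability itself carries, inertly, everything needed to amplify it.
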
Ramifications

For either of these schemes I would have to invoke C via my private facet to C so that others would be unable to open these objects and thus break abstraction. I want to reserve for myself the ability to break the abstractions that I created.

If the clients do not trust me with their secrets they may require that their source of objects, C, be a factory. In this case they may also decline to pass me their capability for their instance, thus keeping their secret state from me. There is then no way for me to see or debug their state.

Another application of amplification is a counter resetter. In Scheme, rscnts is a source of resettable counters. (rscnts) yields a pair (cs . resetter), where cs is a counter source and resetter is a function whose side effect is to reset the counter passed to it. resetter works only on the yields of the cs that it was born with.
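
A minimal sketch of rscnts follows. Only the names rscnts, cs, and resetter come from the description above; the rest is assumed, including that a counter is a thunk that increments and returns its count.

    ;; rscnts: a source of resettable counters.  Each call to cs yields
    ;; a fresh counter; resetter amplifies such a counter back to the
    ;; reset authority, which the counter itself never exposes.
    (define (rscnts)
      (let ((brood '()))                       ; (counter . reset-thunk) pairs
        (define (cs)
          (let ((n 0))
            (define (counter) (set! n (+ n 1)) n)
            (set! brood (cons (cons counter (lambda () (set! n 0))) brood))
            counter))
        (define (resetter c)
          (let ((entry (assq c brood)))        ; only this cs's yields match
            (if entry
                ((cdr entry))                  ; run the private reset thunk
                (error "not a counter from this source" c))))
        (cons cs resetter)))

For example:

    (define p (rscnts))
    (define cs (car p))
    (define resetter (cdr p))
    (define c (cs))
    (c)                ; => 1
    (c)                ; => 2
    (resetter c)       ; amplification: the reset authority is recovered
    (c)                ; => 1

A counter born of a different (rscnts) call is absent from this brood, so resetter rejects it: the reset authority is reserved to the source that created the counter.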

Note that the ability to amplify rights is somewhat closely held and also applies only to certain selected capabilities. I recall that in Hydra amplification occurred as an automatic consequence of an object receiving, in a message, a capability for something that the receiver had the natural right to examine. In Keykos, by contrast, the ability to break the abstraction is itself an explicit capability.

Rights amplification is a form of Synergy.


The above ideas were in place when the factory was invented. Today, in 2007, Jonathan Shapiro raised a plausible scenario that should be considered.

Suppose that I, the programmer of some class of objects, delegate to X the job of debugging a broken instance of an object of my class. I do this by opening the instance and giving the domain key to X. X has read access to the code, but I do not give X write access to the code, for that would allow him to corrupt all instances of the class. By ‘the code’ I mean here the binary form that all of these instances obey. Unless I have used the (unimplemented) fort mechanism, I may have retained write access to this code. If I should be corrupted while I still retain access to the system(s) that run my code, I am in a position to corrupt all of the instances without using synergy; I merely overwrite the code. The plan is for X to explain the problem with my code to me. I will fix it without X having the opportunity to corrupt these objects.

The scene for the crime is now set. The object that X is debugging is the sort that deals with its siblings. Before it broke, it identified some of its siblings. Indeed, it retains the opened or unopened keys to some of these siblings. Worse, perhaps, it permanently holds the authority to open any such instances that X may come across through other means. When I delegated to X an opened object of the class, he gained access to:

Regarding the Keykos design, I conclude that there are possibly newly perceived dangers in debugging objects that may hold critical authority, and further reasons for implementing and using the fort.