See (p3,eggintro) for a partly obsolete introduction to, and motivation for, the factory.
Absolute Discretion {as in the yield of FACTORYC}
This can be carried out if a DDT is built within the object and the creator communicates with the DDT without passing keys.
This seems to falter on the issue of broken objects within broken objects. The debugger of the outer object may need to call upon the inner object's creator to fix the inner object. The outer object's debugger, however, has only the authority of the outer object, which is insufficient to connect the inner object with its creator. This is because the requestor's key to the factory that created the inner object will lack fix rights.
We conclude that for now the fix operation leaves the object permanently potentially structurally indiscreet {sic}.
Speculations on how to solve the general problem:
Suppose that the “fix” operation on the requestor's key returned a protected viewport to the object's state.
Suppose that this viewport were really a start key to a domain with the brand of the object and no code.
An alternative design would have allowed the .hole to be replaced by the holder of a builder's key. Since the factory creator and the builder's key holder are usually the same, this may be an unimportant distinction.
This has certain advantages. Getting a new factory with a preexisting brand is simplified - you don't have to use the copy key. Initialization of OS domains would be conceptually simpler - there could be an “OS domain” object and there would not be the problem of the domain having {and misusing} the real domain key to itself or having to invoke the OS simulator. You could just install an “OS domain creator”.
The destroy and identify functions of the domain creator can be more efficiently implemented as calls {kt+4 and ?} on the virtual domain key.
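A rough sketch of this arrangement {Python as pseudo-code; every class and method name here is illustrative, not a Gnosis interface}: the domain holds only a virtual domain key, which forwards destroy and identify to the real key it wraps, so the domain never holds {and cannot misuse} the real domain key to itself.

```python
class RealDomainKey:
    """Stand-in for the real domain key, which only the creator holds."""
    def __init__(self, brand):
        self.brand = brand
        self.destroyed = False

    def destroy(self):
        self.destroyed = True


class VirtualDomainKey:
    """What an "OS domain creator" would hand out instead of the real key."""
    def __init__(self, real_key):
        self._real = real_key

    def destroy(self):
        # forwarded directly to the real key, with no creator round trip
        self._real.destroy()

    def identify(self, brand):
        # true iff the wrapped key was produced under this brand
        return self._real.brand is brand
```

This is only a model of the authority flow; in Gnosis the forwarding would be done by the kernel rather than by an interposed program.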
Specifying the factory's domain creator might seem dangerous for the purposes of the factory.
There are two kinds of information within an object to protect: the nature of the algorithm and the data being processed. We argue here that both are protected.
No one needs to hold the builder's key but the owner of the algorithm. Thus the result of opening the object cannot compromise that information.
The start key {or other object handle} represents the authority to access at least some portion of the coded information. The opener of the object must hold this handle. If the owner of the object decides to reveal his secrets to the builder he gives an object handle to the builder.
If, within the object, there are information barriers, the creator is responsible for their logic. This means that the object owner already must trust the creator's code to enforce that barrier. He may then trust the creator.
The attempts to refine the EGG architecture enough to implement it as part of the standard system have become dependent upon the ability to describe how discreet an object must be, particularly if it is a compound object, built out of other discreet objects.
It appears that many objects will not be absolutely discreet according to the current (7/22/81) definition used in building objects via eggs. As examples of the kind of problems, it may be necessary to trust the system administrator of a database, or to trust an admittedly indiscreet key which is part of an object.
Dilemmas about discretion
dilemma 2: EGGs are intended to make things discreet - but the current design requires that anything which is indiscreet is known to the creator, which is exactly the opposite of what is intended.
question 2: Many objects (such as a mail union) are inherently indiscreet. Is there a large enough mass of objects which still make sense if they are required to be structurally discreet?
question 3: How do you build objects which you can trust if the objects are structurally discreet?
question 4: How do you build objects which are structurally indiscreet, but for which you trust all of the indiscreet components?
question 5: How do you build objects out of components which are indiscreet, but which you still trust?
question 6: Is it necessary to develop a language to describe discretion?
question 7: Is it possible for a single egg to serve varying levels of discretion?
answer 2: We think we know how to build relatively discreet objects from a few primordial constructs which we have defined (in the egg) to be discreet.
Trusting the Factory
Except for this fluke, requestor's keys would be manifestly prompt. I suppose that a “relative discretion” query could also report whether the .program and .keeper components were themselves requestor's keys.
This problem is about to be fixed. The new domain will call any necessary factories. If they aren't prompt, the original factory is still available.
As with discretion, the integrity of an object is limited by the integrity of its components.
The sense key provides read access to the segment aspect of an arbitrary object. Holding such a sense key is not an obstacle to discretion.
The integrity of an object, on the other hand, seems to require attention at all stages of construction.
How shall we demonstrate the solidity of an object built by FSC or VCSK? Each of these comes from factories, but they come in contact with programs which might pass them destruct keys so as to become un-solid. Two ways come to mind:
The Sub-Contractor Problem
These are the basic kinds of methods that I know of here:
Payment per service: either accounts {(p2,account)} or assets {(asset)}.
The limited service object seems to require some advanced planning so as not to run out in mid job. Programs are not good at such planning.
I think that we are left with some sort of loose change scheme such as described in (asset-fact).
Unfortunately this requires institutionalizing slow data channels and the loose change system within the factory. It decrees that there is just one sort of money and that the slow channel reporting periods are standard. Even with these limitations this scheme seems the best so far.
See (p2,purse) for an elaboration of this idea.
This has the advantage that no factory spec change is required. It has the disadvantage that the account specification is left to the application logic: an un-Gnosis-like solution. The net effect is that untraceable deposits could be made in accounts.
This is like the 940's “stream accounting” except for the anonymity of the depositors.
The most obvious proposal for factory design modification would be to institute a counter in the complete factory that would be decremented for each object produced. The factory would be dissolved upon exhaustion of the counter. The counter could be set or incremented by the builder's key and read by the requestor's key.
Requestor's keys to such factories would, alas, not be value keys and would thus be ineligible for inclusion as components in other factories except as holes.
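To make the counter proposal concrete, here is a toy model {Python as pseudo-code; all names are hypothetical, and the real factory would be a kernel-supported object, not a class}: the builder's key may set or increment the counter, the requestor's key may read it, each object produced decrements it, and production fails once the counter is exhausted.

```python
class FactoryDissolved(Exception):
    """Invoking a requestor's key on an exhausted factory fails."""


class CountedFactory:
    """Toy model of the proposed counter-limited complete factory."""
    def __init__(self, count):
        self._counter = count          # objects remaining to be produced

    # --- builder's key operations ---
    def set_counter(self, n):
        self._counter = n

    def increment_counter(self, n=1):
        self._counter += n

    # --- requestor's key operations ---
    def read_counter(self):
        return self._counter

    def create_object(self):
        if self._counter <= 0:
            # the real factory would be dissolved outright at this point
            raise FactoryDissolved()
        self._counter -= 1
        return object()                # stand-in for the yielded object
```

In the real design the factory is destroyed, not merely refused, when the counter runs out; the exception above merely marks that boundary in the model.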
It seems that one should be able to install a private coin receiver which would obey a trusted algorithm but which would lead to the particular author's coffer. It is true that such a private collector would have a hole, but the fact that the private object's algorithm is trusted should somehow warrant its inclusion on some categorical basis.
An approach to a solution to this problem might be a modification to factory design. Here goes.
Imagine a “categorical hole”. Along with the current three circumstances under which a component can be added to a factory there would be a new circumstance. The factory would know some creator {factory or domain creator} which would vouch for the trustworthiness of objects created thereby.
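A sketch of the rule {Python as pseudo-code; the three existing acceptance circumstances are abstracted away as a stub, and every name here is hypothetical}: a component key is admitted as a categorical hole exactly when some creator known to the factory vouches for it, i.e. the key's brand is that creator's.

```python
class Creator:
    """Stands in for a factory or domain creator; its yield bears its brand."""
    def vouches_for(self, key):
        return key.brand is self


class Key:
    """A key to some object, branded by the creator that produced it."""
    def __init__(self, brand):
        self.brand = brand


class Factory:
    def __init__(self, trusted_creators):
        self._trusted = list(trusted_creators)  # creators known to the factory
        self.components = []

    def install(self, key):
        if self._accepted_otherwise(key) or self._categorical_hole(key):
            self.components.append(key)
            return True
        return False

    def _accepted_otherwise(self, key):
        # the current three circumstances, elided in this sketch
        return False

    def _categorical_hole(self, key):
        # new circumstance: some known creator vouches for the object
        return any(c.vouches_for(key) for c in self._trusted)
```

The point of the model is only the admission test: trust attaches to the creator, and flows to its yield, rather than attaching to each component key individually.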
When we speak of what keys there are on one side of a fence {or within an envelope} perhaps we must include within that set the keys producible therein. In particular we would include the fetcher key producible by a factory creator's key with “recall fetcher rights”.