Security as Discretion plus Integrity
There are two practical problems here. First: to avoid using products indirectly founded upon private space banks that may be zapped when it is no longer remembered that some yield of the bank has been put to public use. Second: to assure that an object contains no information beyond that available to its builder at build time. The first problem can be largely solved by tracking banks somehow. The second requires attending as well to other keys that can transfer effects into the realm. A complete solution to either problem must cover both kinds of flaw.
We want relative invulnerability to support the injunction ideas at (inj). See (integ-ram) for some misgivings concerning excessive integrity.
What are the sources of behavior variation of an object, beyond variation in the requests it receives? I.e. can I expect a compiler to compile tomorrow what it compiles today? A compiler instance may vary after having compiled a program. The compiler instance factory, however, should produce instances that provide the same output when provided with the same input.
Prior factory logic warrants that a factory yield cannot transmit (except via the key delivered to the requestor). We now want to arrange a warrant that a yield cannot receive (except by the same key).
The notes at (noise) indicate some limits to the limits that can be placed on 370 program variability. There are, however, broad circumstances where receiving noise is tolerable whereas receiving hostile signals is not. Machine builders can make the TOD clock unavailable to the user program. Behavior probability distributions can be sampled early and counted on later. If a compiler usually compiles my program today, then I can count on it usually compiling it next year. Covert path analysis needs to argue the absence of signal in the noise.
We need a constructive definition of a class of value objects with attributes much like those described at (vo).
It seems clear that we will need banks whose guard keys are accounted for. This may be achieved by modifying the bank or by introducing an enhanced bank trusted to confess its vulnerabilities.
We will need segments to hold the code of invulnerable objects. These segments need not be unchanging but merely not receiving. (Unchanging will do but we should not be confused as to the real requirement.)
The ambiguity of “discreetness”.
This convenient ambiguity of “discreet” has also confused some of my ideas on “invulnerability”. It is the former factory property, in both cases, that is inherited by factories from their components. Non-factories may have this property. It is an attribute that a factory must be able to sense in an object given a key to that object; it cannot depend on the distribution of keys to that object.
Factory logic can vouch for its yield’s discreetness only at the moment of birth. The same will be true of the new factories regarding vulnerabilities.
Let me switch terminology for a bit. Instead of talking about integrity let’s talk about “vulnerability”. Vulnerability is modeled by a set of keys called “exposures” that can damage us and whose distribution the vulnerability-measuring mechanisms cannot vouch for. We argue that vulnerability is a cumulative function of these exposures.
An object may fail to do what is required either because its defining code has a bug or because some subcontractor fails or because some key (an exposure) has been invoked. An object’s exposures thus include the exposures of its subcontractors. The factory mechanism can track these exposures just as it tracks holes.
Note that factory components are holes unless they are immutable. They are also exposures unless they are immutable.
Note that exposures are anonymous just as holes are. This solves some security problems wherein the nature of the holes or exposures may be classified. We must find a Franz Kafka quote here!
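A minimal sketch of this bookkeeping, in Python, may make the accumulation rule concrete. Everything here is illustrative rather than a KeyKOS interface; exposures are carried as anonymous tokens so that the measuring mechanism can compare and combine them without learning what they designate.

```python
# A minimal sketch of hole/exposure bookkeeping; names here are
# illustrative, not KeyKOS interfaces. Tokens are anonymous: they
# identify an exposure without revealing what it designates.
from dataclasses import dataclass, field
from typing import List, Union


@dataclass(frozen=True)
class Key:
    token: int               # anonymous identity of the designated object
    immutable: bool = False  # immutable components are neither holes nor exposures


@dataclass
class Factory:
    components: List[Union["Factory", Key]] = field(default_factory=list)

    def exposures(self) -> frozenset:
        """Keys whose invocation could change a yield's behavior.
        A factory component contributes its own exposures; any other
        mutable component is itself an exposure. Holes accumulate by
        the same rule, with information flowing the other way."""
        acc: set = set()
        for c in self.components:
            if isinstance(c, Factory):
                acc |= c.exposures()      # inherited from subcontractors
            elif not c.immutable:
                acc.add(c.token)
        return frozenset(acc)
```

The crudest cumulative vulnerability measure is then just the size of this set; any monotone function of it would respect the accumulation argument above.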
Other kinds of fake pages and nodes
Perhaps this same ruse works to produce discreet record collections wherein one record collection instance updates a d-tree of d-nodes and d-pages and other collection instances consult the real tree. In such a system, factory requestor keys can be included because the d-node can test the discretion.
The fuzzy idea is that there is a security TCB that comprises v-banks, banks, and factories. They would trust each other’s security assessments.
Just as an old style factory has a nascent phase when its holes can increase, so would a v-bank have such a nascent phase when holes and exposures increase and security decreases. The seal order on a v-bank would cause all v-keys that it made to disappear and return an MSK (manifest security key) to the sealer of the v-bank.
An order on a v-bank passing a v-node or v-page key would disband the v-bank and return an MSK that represents the same object as the v-node or v-page key.
A v-bank carries a security value {which means that it specifies some specific integrity and discretion}.
A v-bank produces v-pages and v-nodes. v-pages and v-nodes refer to the v-bank that produced them and obey all page and node orders.
In particular the v-node yields a memory key upon order NODE__MAKE_SEGMENT. The domain tool will not accept a v-node nor will a domain creator accept a v-bank. V versions of the domain tool or domain creator might, perhaps, be possible. This is for further study.
The NODE__SWAP orders on v-nodes have limitations akin to limitations on installation of factory components. v-node keys, v-segment keys, v-page keys from the same v-bank may be stored. Factory keys {whose security is known to the new factory logic} may be installed. Manifestly secure keys such as DKC, DISCRIM, and perhaps SBT etc. may be installed.
In fact the limitations are the same. I suspect that there are important cases where a requestor’s key must be installed in a factory as a hole, and similarly I think that it is necessary to have several ways of putting a key in a v-node. Calling the v-bank, passing a v-node and a key to be installed therein, might be a possible design.
A new order, SEAL, on the VB takes a v-key, seals the object, and returns an MSO key to it. VB and all of the v-nodes and v-pages from VB disappear at this time and only the MSO carries the security assessment. All material drawn from VB’s first bank has been returned and the underlying object (and the assessment) is constructed from VB’s second bank. Since VB is gone there are no further references to either of its two banks. An MSO’s security is manifest to security experts such as new factories or v-nodes from other VBFs. Such experts can bypass the seal and get at the underlying secure objects so that the ultimate secure object can run at native speed.
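A sketch of this life cycle, under the same illustrative conventions as above; only SEAL and NODE__SWAP are orders named in the text, and the rest is assumed for the sketch.

```python
# Sketch of the v-bank life cycle; SEAL and NODE__SWAP are the orders
# named in the text, everything else is assumed for illustration.
from enum import Enum, auto


class Kind(Enum):
    V_NODE = auto()             # from this same v-bank
    V_PAGE = auto()
    V_SEGMENT = auto()
    FACTORY = auto()            # requestor key, security known to factory logic
    MANIFESTLY_SECURE = auto()  # DKC, DISCRIM, perhaps SBT, ...
    OTHER = auto()


class MSO:
    """Manifestly secure object: only security 'experts' (new factories,
    v-nodes) bypass the seal, so the underlying object runs at native speed."""
    def __init__(self, underlying, exposures):
        self._underlying = underlying
        self.exposures = exposures


class VNode:
    def __init__(self, bank):
        self.bank = bank
        self.slots = {}
        self.revoked = False

    def swap(self, slot, key, kind):
        """NODE__SWAP analogue, limited like factory-component installs."""
        assert not self.revoked and self.bank.nascent
        if kind is Kind.OTHER:
            raise PermissionError("key would transcend the hole/exposure sets")
        self.slots[slot] = key


class VBank:
    def __init__(self):
        self.nascent = True   # holes and exposures may still grow
        self.issued = []      # every v-key handed out
        self.exposures = set()

    def make_v_node(self):
        assert self.nascent
        vn = VNode(self)
        self.issued.append(vn)
        return vn

    def install(self, v_node, slot, key):
        """The design floated above: call the v-bank, passing a v-node and
        a key to be installed therein; the key is recorded as an exposure."""
        assert self.nascent and v_node.bank is self
        self.exposures.add(key)
        v_node.slots[slot] = key

    def seal(self, root):
        """SEAL: all v-keys disappear; only the MSO carries the assessment."""
        self.nascent = False
        for k in self.issued:
            k.revoked = True
        return MSO(root, frozenset(self.exposures))
```

Note that the install-by-calling-the-bank path records the key as an exposure, matching the suggestion above that a requestor’s key may need to enter a v-node as a deliberate hole.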
On the other hand, if everyone who is interested in discretion is also interested in integrity, it might be better to combine the two TCBs.
We speculate here on the uses of this particular kind of manifest security. What we want, roughly, is to know what depends on what. We imagine someone who is paying for space wondering about the ramifications of deleting some old “contexts”. There is a definite list of official applications on the machine to whom implicit reliability guarantees have been made.
The problem would seem to reduce to asking, for each application and each of the contexts, does that application depend on that context? In this exercise we attempt to retrofit these ideas to current practice.
I believe that it is necessary to register the applications and the contexts at their creation to answer these questions. What is a registered application and what is a registered context?
Imagine a sort of graftable tree. Each context and each application would be a node of this tree. A new context intended to support applications would be registered in this tree. Perhaps it would be registered as a leaf under the node that was registered as the context switcher...???
Just as a new user’s world is the yield of a factory with known discretion he could make a context as a factory yield with an unsealed v-bank. After building and testing his product and just before releasing it to the world he would seal the v-bank and place the MSO key
Orange Book ...
Here is one simple application of the “hole-exposure” scheme. (Fancier ones to come.)
Hidden behind the scenes here there is of course some trusted agency keeping track of lease terms and such. The trusted code in even these can be minimized by installing the only zap key to the v-bank in an injunction box that will remain sealed for five years.
A simpler use of exposure logic is to buy the storage forever and pay the $20 real price for the megabyte.
The manager has a list S of functions (or groups here) whose support is committed to the enterprise. Of the hundreds of primitive groups that once supported something, he thinks that list B suffices to support S. He would like to reclaim the space used by objects not in B. He needs to test that S depends on nothing beyond B. He tests this by building a factory s whose components are the members of S and another, b, whose components are the members of B. An inclusion test showing that s’s exposures are included in those of b confirms that preserving b will preserve s, and thus that objects outside b may be safely deleted. Other hypotheses may likewise be tested. A program could, in fact, automate this activity, providing something like garbage collection. Some form of accountability can also be provided by knowing who depends on what.
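A sketch of the manager’s test, assuming factories answer an exposures() query of the kind sketched earlier; the function names are mine.

```python
# Sketch of the manager's test; assumes factories answer an exposures()
# query of the kind sketched earlier. Function names are illustrative.

def preserving_b_preserves_s(s_exposures: frozenset, b_exposures: frozenset) -> bool:
    """S depends on nothing beyond B exactly when s's exposures fall
    within b's; if so, objects outside b may be safely deleted."""
    return s_exposures <= b_exposures


def reclaim_report(s, candidate_keep_lists):
    """Something like garbage collection: for each candidate keep-list b,
    report whether preserving b suffices to preserve s."""
    return {name: preserving_b_preserves_s(s.exposures(), b.exposures())
            for name, b in candidate_keep_lists.items()}
```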
A (merely) pregnant idea is that MS objects may be duplicated and also copied to other systems!
Practically we want it to be that when the (untrusted) .program of a factory gets its hands on a component, it can’t use it to do something different on one occasion than on another. In particular it can’t modify the object designated by the component key.
If we had any “write only” objects we could allow keys to them here.
It is useful to let a factory component be a fetch key to a node holding keys to factories. It must be known that there is no node key to that node (which might be used to place a real hole in the node). Note that this is not now allowed, on discreetness grounds, even though it does not in fact impair actual discretion.
Perhaps the most significant advance that we contemplate is the following extension of the type of assertion: in the original factory theory, components might be factory requestor keys, but they could not be some kind of key to a node holding a requestor key. While we could have factories leading to factories, thence leading via sensory keys to pages and nodes, our tree paths had no nodes except beyond all the factories. This led to such awkward designs as fetcher factory trees.
Perhaps we should support a discreet tree which allows access to requestor keys via nodes. The current attenuating sense-key function ensures that write authority does not stem from read authority. Perhaps the same end can be served by ensuring that there is no write authority in the tree. V-nodes won’t accept keys that transcend established hole and exposure sets. This does not entirely supplant the current design, which allows read-only access to a structure X while others legitimately acquire write access to structure X. In summary, the current design allows read-only (discreet) access to a limited class of dynamic trees, while the new design allows access to a larger class of static trees which manifest integrity as well as discreetness.
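A sketch of the one-time check that could replace per-key sense attenuation for such static trees; the key kinds and node shape are assumed for illustration.

```python
# Sketch of the static-tree check: instead of attenuating every key with
# a sense key, verify once that no key anywhere in the tree conveys write
# authority. Key kinds and the node shape are assumed for illustration.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional, Tuple


class K(Enum):
    FETCH = auto()       # read-only access to a node's slots
    SENSE = auto()       # attenuated, read-only
    REQUESTOR = auto()   # factory requestor key, now admissible mid-tree
    PAGE_RO = auto()
    NODE = auto()        # plain node key: conveys write authority
    PAGE_RW = auto()     # write authority


WRITE_KINDS = {K.NODE, K.PAGE_RW}


@dataclass
class Node:
    slots: List[Tuple[K, Optional["Node"]]] = field(default_factory=list)


def tree_is_discreet(node: Node) -> bool:
    """True if no slot, recursively, conveys write authority. Requestor
    keys are leaves here: factories vouch for their own yields."""
    for kind, child in node.slots:
        if kind in WRITE_KINDS:
            return False
        if child is not None and not tree_is_discreet(child):
            return False
    return True
```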
Yet to be answered is whether we can support discreetness while supporting only limited integrity. Are exposures an illegitimate source of holes? May any degree of mutation be allowed while still supporting any degree of discreetness?
A kindred question arises regarding the admission, in manifestly secure trees, of domains that obey uncertain code. This dilemma may require a subdivision of the integrity concept. Such a structure may change itself in a “foreordained” way. It may destroy itself, but it can presumably not do so conditioned upon information unavailable when it was constructed. It is not clear what meter it would use unless it was an exposure. I now see no good uses for such unaudited types (domains obeying uncertain code) in secure trees.
The new ideas may be best characterized as pre-auditing instead of post-enforcement.
...