Appendix I. Issues, levels, and scale
This appendix explores how various computational issues
change character from lower to higher levels of a system (in the sense
described in Section 3.7). Agoric open systems can most easily be developed by building up from current systems, by finding ways to make a smooth transition from current programming practice to the practices appropriate to market ecosystems. (One aspect of this is dealt
with in [III].) Understanding how issues will
change from level to level will aid this process and minimize the chance of
misapplying concepts from one level to problems on another level.
Higher levels of organization will raise issues not so
much of system correctness as of system coherence. For example, while a sorting
algorithm may be correct or incorrect, a large collection of software tools may
be coherent or incoherent: its parts may work together well or poorly, even if
all are individually correct. The notion of coherence presumes a level of
complexity that makes it inapplicable to a sorting algorithm. Despite the
differences between correctness and coherence, they have much in common:
correctness can be seen as a formal version of coherence, one appropriate for
small-scale objects. In this, as in many of the following issues, hard-edged
criteria at lower levels of organization have soft-edged counterparts at higher
levels.
I.1. Security
Alan Kay has characterized compatibility, security, and
simplicity as essential properties for building open systems. For mutually
untrusting objects to interact willingly, they must be secure. Encapsulation
can provide security at a low level, as a formal property of computation. With
this property, one can code an object so that the integrity of an internal data
structure is guaranteed despite possible nonsense messages. Security at a high
level involves skepticism and the establishment of effective reputation
systems. Skepticism enables an object to continue reasoning coherently despite
being told occasional lies.
Encapsulation (in this case, protection against tampering) is necessary for skepticism to work. Without encapsulation, a skeptical object's intellectual defenses could be overcome by the equivalent of brain surgery.
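A minimal sketch of this low-level sense of security, written here in Python with hypothetical names, shows an object whose internal invariant survives arbitrary incoming messages because every message is checked at the boundary:

```python
class BalanceHolder:
    """Illustrative encapsulated object (hypothetical example). The only
    access to _balance is through methods that validate their arguments,
    so the invariant _balance >= 0 survives nonsense messages. (Python's
    leading underscore is a convention, not enforcement; a
    capability-secure language would enforce the boundary itself.)"""

    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        # Reject malformed messages rather than corrupting state.
        if not isinstance(amount, int) or amount <= 0:
            raise ValueError("deposit expects a positive integer")
        self._balance += amount

    def withdraw(self, amount):
        if not isinstance(amount, int) or not 0 < amount <= self._balance:
            raise ValueError("malformed or excessive withdrawal")
        self._balance -= amount
```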
I.2. Compatibility
Compatibility allows objects to be mutually intelligible, despite diverse origins. At a foundational level, it involves a shared message-passing medium and mutual understanding of some protocol. Inside a small
program written by a single programmer, objects can be carefully crafted so
that any two that communicate will necessarily use the same protocol. Between
large objects written by different people, or the same person at different
times, checking for protocol agreement can frequently prevent disaster. For
example, if an object is passed a reference to a lookup table when it is
expecting a number, it may help to learn that "addition" will not be understood
by the table before actually attempting it. Note that this itself relies on
agreement on a basic protocol which provides a language for talking about other
protocols.
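In Python, for example, such a check might be sketched as follows; the function and the choice of "addition" follow the lookup-table example above and are purely illustrative:

```python
def add_ten(recipient):
    # Bootstrap from a shared base protocol: any Python object can be
    # asked whether it understands a message before the message is sent.
    if not hasattr(recipient, "__add__"):
        raise TypeError("recipient will not understand 'addition'")
    return recipient + 10

add_ten(32)      # a number speaks the arithmetic protocol: returns 42
# add_ten({})    # a lookup table does not: fails fast, before any damage
```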
In the Xerox Network System, clients and servers compare not only the type of protocol they speak, but also the range of protocol versions that they understand [65]. If their ranges overlap, they speak the latest mutually understood version; if not, they part and go their separate ways. This is
an example of bootstrapping from a mutually understood protocol to determine
the intelligibility of other protocols. The developing field of
interoperability [66] should soon provide many
more.
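The range-comparison step can be sketched in a few lines; the representation below (inclusive low/high pairs) is an assumption for illustration, not XNS's actual wire format:

```python
def negotiate_version(client_range, server_range):
    """Each party offers an inclusive (low, high) range of protocol
    versions it understands; both then speak the latest version in the
    overlap, or part ways if there is none."""
    low = max(client_range[0], server_range[0])
    high = min(client_range[1], server_range[1])
    return high if low <= high else None

assert negotiate_version((2, 5), (4, 7)) == 5     # overlap: speak version 5
assert negotiate_version((1, 2), (4, 7)) is None  # no overlap: part ways
```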
Sophisticated objects should eventually have still broader
abilities. Human beings, when faced with a novel piece of equipment, can often
learn to make profitable use of unfamiliar capabilities. Among the techniques
they use are experimentation, reading documentation, and asking a consultant.
One may eventually expect computational analogues [67].
I.3. Degrees of trust
Security is needed where trust is lacking, but security
involves overhead; this provides an incentive for trust. At a low level, a
single author can create a community of trusting objects. At an intermediate level, trust becomes riskier because error becomes more likely. This
encourages error-checking at internal interfaces, as is wise when a team of
programmers (or one forgetful programmer) must assemble separately developed
modules.
At higher levels, strategic considerations can encourage
partial trust. A set of objects may make up a larger object, where the success
of each depends on the success of all. Here, objects may trust each other to
further their joint effort [68]. Axelrod's
iterated prisoner's dilemma tournament [69] (see
also [I]) shows another way in which strategic
considerations can give rise to trust. One object can generally expect
cooperative behavior from another if it can arrange (or be sure of) appropriate
incentives.
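Axelrod's tournament winner, tit-for-tat, illustrates how a simple incentive arrangement yields cooperation; the sketch below is a standard rendering of the strategy, not code from the tournament itself:

```python
def tit_for_tat(their_history):
    # Cooperate on the first move, then mirror the partner's last move;
    # defection is answered promptly, so sustained cooperation pays.
    return "C" if not their_history else their_history[-1]

# Two tit-for-tat players settle into permanent mutual cooperation.
a_history, b_history = [], []
for _ in range(10):
    a_move, b_move = tit_for_tat(b_history), tit_for_tat(a_history)
    a_history.append(a_move)
    b_history.append(b_move)
assert a_history == ["C"] * 10
```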
In a simple iterated prisoner's dilemma game, arranging such incentives requires both having a long-term relationship and paying the overhead of noticing and reacting to non-cooperative behavior. Reputation systems within a
community can extend this principle and lower the overhead of using it. Some objects can gather and sell information on other objects' past performance:
this both provides incentives for consistently good performance and reduces the
cost of identifying and avoiding bad performers. In effect, reputation systems
can place an object in an iterated relationship with the community as a
whole.
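A reputation service of this kind might look roughly like the following sketch (the class and its interface are hypothetical):

```python
class ReputationBroker:
    """Aggregates reports of past performance and sells summaries,
    placing each object in an iterated relationship with the whole
    community rather than with one partner at a time."""

    def __init__(self):
        self._reports = {}  # object id -> list of cooperation outcomes

    def report(self, obj_id, cooperated):
        self._reports.setdefault(obj_id, []).append(bool(cooperated))

    def score(self, obj_id):
        history = self._reports.get(obj_id)
        if not history:
            return None  # unknown object: the buyer must judge the risk
        return sum(history) / len(history)  # fraction of good outcomes
```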
The idea that encapsulation is needed at low levels for security may seem to conflict with the expectation of complete trust there. But the function of encapsulation is to protect simple objects where trust is limited or absent (as it will be between some pairs of objects). Complete trust makes sense among simple objects that are, in some sense, playing on the same team.
I.4. Reasoning
Programming language research has benefited from the
methodology of formalizing programming language semantics. A result is the
ability to reason confidently (and mechanistically) about the properties of
programs expressed in such languages. This can establish confidence in the
correctness of programs having simple specifications. The logic programming
community is exploring the methodology of transforming a formal specification
into a logic program with the same declarative reading. The resulting logic program is not generally guaranteed to terminate, but if it does, it is guaranteed to yield a correct result: the interpreter is a sound (though incomplete) theorem prover, so any answer it derives follows from the program's declarative reading.
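A loose analogue of this methodology can be written in a conventional language. The classic example is permutation sort: the specification "a sorted version of xs is an ordered permutation of xs" is run directly as a search, and any answer it yields is correct by construction (the rendering below is illustrative Python, not a logic program):

```python
from itertools import permutations

def spec_sort(xs):
    """Execute the declarative reading of the specification: search for
    a permutation of xs that is ordered. Correct whenever it answers,
    though as a search procedure it is hopelessly inefficient."""
    for candidate in permutations(xs):
        if all(a <= b for a, b in zip(candidate, candidate[1:])):
            return list(candidate)

assert spec_sort([3, 1, 2]) == [1, 2, 3]
```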
Deductive logic seems inadequate as a high-level model of
reasoning, though there is much controversy about this. High-level reasoning
involves weighing pro and con plausibility arguments (due-process reasoning),
changing one's mind (non-monotonicity), believing contradictory statements
without believing all statements, and so forth. There have been attempts to
"fix" logic to be able to deal with these issues, but [70] argues that these will not succeed. A more
appropriate approach to high-level reasoning emphasizes coherence,
plausibility, and pluralism instead of correctness, proof, and facts. (This
does not constitute a criticism of logic programming: logic programming
languages, like lambda-calculus languages, can express arbitrary calculations,
including those that embody non-logical modes of reasoning.)
I.5. Coordination
In order to coordinate activity in a concurrent world, one
needs a mechanism for serialization. Semaphores [71] and serialized actors [IV,3,4] enable a choice between processes contending for
a shared resource; these primitives in turn make possible more complex
concurrency control schemes such as monitors [19] and receptionists [4], which allow the protected resource to interact
with more than one process at a time. Monitors in turn have been used to build
distributed abortable transactions (as in Argus, described elsewhere in this
volume [V]), which support coherent computation
in the face of failure by individual machines.
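At the lowest of these levels, the effect of a serializer can be sketched in a few lines of Python (the class is illustrative): a lock admits one process at a time, so a read-modify-write sequence on the protected resource stays coherent under contention:

```python
import threading

class SerializedCounter:
    """Monitor-style protection of a shared resource: contending
    threads are serialized by the lock around each update."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def increment(self):
        with self._lock:  # only one thread at a time passes this point
            self._count += 1
            return self._count
```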
For very large distributed systems, transaction-based
coordination requires too much consistency over too many participants.
Dissemination models [38,39,72] and publication models [23,73] provide mechanisms that apply at larger scales.
The Colab project has also extended notions of coordination control. Colab is an effort to build a collaborative laboratory: a multi-user interactive environment for supporting collaborative work [74]. In the Colab, a group of people work together
on a set of data and sometimes contend for the right to modify the same piece
of data. Initial attempts to deal with this by simply scaling up transactions
proved unsuitable. Instead, social-coordination mechanisms were found, such as signals to indicate someone's interest in changing a piece of data. These mechanisms are not human-specific; they should generalize to any situation in which a significant investment in computation would be thrown away by an aborted transaction.
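Such a signal might be sketched as an advisory marker on the shared datum (the names below are hypothetical, not Colab's actual design): nothing is locked, so no computation is forcibly aborted, but participants can see and avoid impending conflicts:

```python
class SharedDatum:
    """A Colab-style social-coordination signal: participants announce
    interest in changing a datum instead of locking it."""

    def __init__(self, value=None):
        self.value = value
        self.interested = set()  # who has signaled intent to modify

    def signal_interest(self, who):
        self.interested.add(who)  # advisory only: nothing is blocked

    def write(self, who, new_value):
        self.value = new_value
        self.interested.discard(who)
```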
An essential aspect of higher-level coordination
mechanisms is negotiation. When allocating exclusive access to a resource for a
millisecond, it often makes sense to rely on simple serialization. When
allocating exclusive access for a year, it often makes sense to take greater
care. One simple form of negotiation is an auction, a procedure in which the resource is allocated to the highest bidder. Hewitt [75] explores Robert's Rules of Order as the basis for more sophisticated negotiation procedures.
Even sophisticated negotiation mechanisms will often rely
on primitive serializers. In auctions, an auctioneer serializes bids; in
Robert's Rules, the chair serializes access to the floor.
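The auction case can be made concrete with a short sketch (again illustrative): a primitive lock serializes the bids, and the negotiation built on top of it allocates the resource to the highest bidder:

```python
import threading

class Auctioneer:
    """Negotiation built on a primitive serializer: the lock orders
    incoming bids; the resource goes to the highest bidder at close."""

    def __init__(self):
        self._lock = threading.Lock()
        self._best_amount, self._best_bidder = 0, None

    def bid(self, bidder, amount):
        with self._lock:  # the auctioneer serializes bids
            if amount > self._best_amount:
                self._best_amount, self._best_bidder = amount, bidder

    def close(self):
        with self._lock:
            return self._best_bidder  # winner of the allocation
```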
I.6. Summary
This appendix has examined how a range of issues (security, compatibility, trust, reasoning, and coordination) may appear at different levels of market-based open systems. Certain themes have appeared repeatedly. Mechanisms at low levels often support those at higher levels, as when high-level coordination mechanisms rely on simple serializers. Further, higher levels can inherit characteristics of lower levels, such as encapsulation and conservation laws.
Issues often blur at the higher levels: security and trust become intertwined, and both may depend on due-process reasoning. The bulk of this paper concentrates on the low- and mid-level concerns that must be addressed first, but the high-level issues present a wealth of important research topics.