This section, and these papers, discuss and criticize many
works. We wish to emphasize that these works have been chosen, not for
their flaws, but for their value.
II.1. The Xanadu hypertext publishing system

This paper has compared agoric systems to other systems
for computation. Our first exposure to many of the central ideas of markets and
computation, however, stems from our work with the Xanadu hypertext system [23]. Xanadu is a proposed on-line publishing medium
for hypertext documents. A hypertext document differs from the normal notion of
a document in that it contains links: connections between documents
which readers can follow at the click of a mouse. Published documents thus form
not a set of disconnected islands, but a web connected by references, quotes,
criticisms, comments, and rebuttals.
How can a reader find a path in such an interconnected
web? Rather than proposing that someone (somehow) create a single, official
system index, the Xanadu project proposes support for decentralized indexing,
and hence for pluralism. Any reader can author and publish a guide to any set
of public documents. Other readers can then use this guide to sort material and
orient themselves. Anyone can, of course, publish guides to other guides.
Xanadu relies on the expectation that this activity will result in a
spontaneous order: a richly connected world of documents in which readers can
find their way.
Why will indexing be done where needed? In part because
readers will do much of the basic searching and sorting for themselves, and
then publish the results (since publishing is easy). In addition, however,
Xanadu provides a charge-per-read royalty arrangement to encourage publication
of material for which there is a demand. Just as charge-per-use software will
make it economical to assemble software from diverse components, so Xanadu's
royalty arrangement is designed to encourage the assembly of documents from
parts of other documents: if one document quotes another, a reader's royalty
payments are split between them.
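As a minimal sketch of how such a royalty split might be computed (the proportional rule, the per-read fee, and all names here are our illustrative assumptions, not Xanadu's actual accounting):

```python
# Hypothetical sketch of a Xanadu-style royalty split: a reader's
# per-read fee is divided among the documents that contributed the
# content read, in proportion to how much each contributed.

def split_royalty(payment, contributions):
    """Divide a reader's payment among contributing documents.

    contributions maps a document's author to the number of words
    that document contributed to the document being read.
    """
    total = sum(contributions.values())
    return {author: payment * amount / total
            for author, amount in contributions.items()}

# A 1000-word document that quotes 250 words from another document:
print(split_royalty(0.10, {"quoting_author": 750, "quoted_author": 250}))
# -> {'quoting_author': 0.075, 'quoted_author': 0.025}
```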
In Xanadu, documents are passive data. One way of
conceiving of agoric systems is as a publishing medium for linked, active
data.
II.2. Knowledge medium

Mark Stefik's "Knowledge Medium" paper [VII] paints
a visionary future in which AI systems, distributed across society, are able to
communicate and share knowledge. In contrast, current expert systems are seen
as isolated systems rebuilt from scratch each time. A knowledge medium would
enable individual systems to specialize in encoding the knowledge most relevant
to them, and would provide a market for the purchase of knowledge represented
elsewhere. As a result, the process of encoding knowledge is expected to
accelerate through division of labor and economies of scale.
This proposal is compatible with the agoric systems
vision, but has a somewhat different emphasis. Stefik's paper emphasizes
representing knowledge, communicating representations, and integrating
representations together. While we certainly expect (and hope) that all this
would occur in an agoric system, the present work emphasizes the sale of
knowledge-based services. In Stefik's vision, a "knowledge provider" responds
to a request by sending a representation of the knowledge it specializes in.
The consumer is then faced with the task of relating this representation to its
own. This problem would create a market for "knowledge integrators". In the
model sketched in this paper, knowledge is "represented" by embodying it in
objects that apply their knowledge to provide services. Consumers would then be
integrating the results in order to provide further services.
Because of the copying problem, a market for services
should be more effective than a market for representations. Once knowledge is
transmitted, it will often spread without further rewarding its creators. This
reduces the incentives for knowledge creation.
II.3. Enterprise Net

Enterprise [VIII], by Malone, provides decentralized
scheduling of tasks in a network of personal workstations by making use of
market-like mechanisms. A client processor with a task to be scheduled
broadcasts a request for bids to contractor processors. Available
contractors respond with bids; these are evaluated by the client, which
then sends the task to the best bidder. The client's request includes
characteristics of the task which are pertinent in estimating its processing
time. The best bidder is generally the contractor who responds with the
earliest estimated completion time. This bidding protocol provides for
decentralized decision making and enables clients to use their own criteria in
evaluating candidate suppliers.
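A minimal sketch of this announce-bid-award cycle follows; the class names, the task description, and the completion-time arithmetic are our illustrative assumptions, not Malone's actual message formats:

```python
# Illustrative sketch of an Enterprise-style bidding protocol: a client
# broadcasts a task description, available contractors reply with
# estimated completion times, and the client awards the task to the
# bidder promising the earliest completion.

from dataclasses import dataclass

@dataclass
class Bid:
    contractor: str
    estimated_completion: float  # e.g., seconds from now

class Contractor:
    def __init__(self, name, speed, queue_time=0.0):
        self.name, self.speed, self.queue_time = name, speed, queue_time

    def available(self):
        return True  # idle processors listen for broadcast requests

    def bid(self, task):
        # Estimate completion time from the task's stated processing needs.
        return Bid(self.name, self.queue_time + task["work"] / self.speed)

def request_bids(task, contractors):
    """Broadcast the task and collect bids from available contractors."""
    return [c.bid(task) for c in contractors if c.available()]

task = {"name": "simulation", "work": 100.0}
machines = [Contractor("vax780", speed=2.0), Contractor("vax750", speed=1.0)]
bids = request_bids(task, machines)
winner = min(bids, key=lambda b: b.estimated_completion)  # earliest finish wins
print(winner.contractor)  # -> vax780
```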
Compared to the agoric systems approach, Enterprise has
several limitations. It assumes full mutual trust between clients and
contractors, all working toward a common objective. It is also less flexible in
the tradeoffs it can make: the system contains non-adaptable parameters
and uses no price mechanism. Lacking price signals, the system relies on
prearranged, non-evolving rules to guide behavior. The inflexibility of such a
system is illustrated by the following example.
Imagine two client tasks: a high-priority theorem proving
task and a lower-priority fluid flow simulation task, and two server machines:
a Vax 780 with an attached array processor and a Vax 750 without one. Both
tasks prefer the 780 because it is faster, but the simulation task vastly
prefers it because of the array processor; in comparison, the theorem prover is
relatively indifferent. In Enterprise, both will try to get the 780, and the
780 will be allocated to the higher priority theorem prover. In an agoric
system, however, the simulation task might offer only a trivial amount of money
for the 750, lowering its market price enough that the theorem
prover finds the bargain worth taking. Alternatively, if the theorem prover is
already running on the 780, the simulation task could offer to pay it to
migrate to the 750. This is but one example of the flexibility that market
prices can bring to a system. Malone acknowledges that it may be useful to
provide a price system within his framework.
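The hypothetical numbers below illustrate the point; the values each task places on each machine are invented for this example, and the brute-force search stands in for the price negotiation a real market would perform:

```python
# Each task states what a machine is worth to it. The allocation that
# maximizes total value (the one market prices would tend toward) gives
# the array-processor-hungry simulation the 780, even though the
# theorem prover has higher "priority".

from itertools import permutations

values = {
    "theorem_prover": {"vax780": 10.0, "vax750": 9.0},  # nearly indifferent
    "simulation":     {"vax780": 50.0, "vax750": 5.0},  # vastly prefers the 780
}

machines = ["vax780", "vax750"]
best = max(permutations(machines),
           key=lambda assignment: sum(values[task][machine]
                                      for task, machine in zip(values, assignment)))
print(dict(zip(values, best)))
# -> {'theorem_prover': 'vax750', 'simulation': 'vax780'}
```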
II.4. Malone's comparison of organizational structure

Malone [76] has also compared various organizational structures for coordinating communities
of agents. A strong similarity between Malone's work and ours is the attempt to
recognize parallel organizational forms in human societies and computer
systems.
Malone sees markets as capable of providing efficient
solutions to the problems of decentralized resource allocation in computer
systems, as they have done in human organizations. He also maintains that
factors existing in human societies which limit the optimality of markets can
be excluded from software systems.
Transaction costs, such as the expenses involved in trading on the market,
limit the use of markets and encourage the use of other forms of
human organization, such as hierarchies. These transaction costs increase in
uncertain or complex markets. Traders must protect themselves from
opportunistic traders, usually by establishing contracts; negotiating such
contracts (and living with their consequences) imposes significant transaction
costs.
Malone assumes that these costs will be absent from
computer systems, arguing that "While non-opportunistic traders may be rare in
human markets, there is no reason at all why computer programs cannot be
constructed with [non-opportunistic] participants in a market-like
organization." This may be so for non-evolving computational entities authored
by an individual or team. In an open distributed system, however, the programs
will themselves be authored by a diversity of people who will in fact have
opportunistic motives with respect to each other; further, EURISKO-like systems
[IX,77] may
evolve software subject only to the constraint of market success. A system
designed under the assumption of non-opportunistic participants can be
effectively used only within limited contexts (roughly speaking, within a
single firm).
II.5. Harris, Yu, and Harris's market-based scheduling algorithm

Harris, Yu, and Harris have applied
simulated markets to difficult factory scheduling problems. Although total
optimality can be defined in this case, finding it is known to be NP-hard [78], and their initial results indicate that Pareto
optimal schedules are very good by most conventional measures. In their
approach, the requirements, constraints, and tradeoffs for scheduling an
individual order are represented by a utility function. These utility functions
can express many of the "arbitrary" constraints typical of a real factory, such
as a requirement that one step follow another within a given time limit. By
having these utility functions interact to set prices, a Pareto optimal
solution is found relatively quickly by local hill climbing. "In less than a
minute [this algorithm] can schedule an order requiring 150 processing steps
over 90 resources" [78]. This system, while not
allowing for evolution of scheduling strategies, demonstrates the value of a
market model for directing resource allocation by computational means.
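As a toy illustration of the general idea (this is not Harris, Yu, and Harris's actual algorithm, and it omits their price mechanism), consider hill climbing that accepts only Pareto-improving swaps between orders' time slots:

```python
# Toy Pareto hill climbing: each order scores candidate slots with its
# own utility function, and a swap of two orders' slots is accepted
# only if it makes at least one order better off and neither worse off.

import random

def pareto_hill_climb(schedule, utilities, steps=1000, seed=0):
    """schedule: list of order ids, one per time slot.
    utilities: dict mapping order id -> function(slot index) -> float."""
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.sample(range(len(schedule)), 2)
        a, b = schedule[i], schedule[j]
        gain_a = utilities[a](j) - utilities[a](i)
        gain_b = utilities[b](i) - utilities[b](j)
        if gain_a >= 0 and gain_b >= 0 and (gain_a > 0 or gain_b > 0):
            schedule[i], schedule[j] = b, a  # accept the Pareto improvement
    return schedule

# Two orders: "rush" wants the earliest slot, "flexible" is indifferent.
utilities = {"rush": lambda slot: -slot, "flexible": lambda slot: 0.0}
print(pareto_hill_climb(["flexible", "rush"], utilities))
# -> ['rush', 'flexible']
```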
The representation language for expressing the preferences of an individual
order is quite flexible, though less flexible than a general-purpose
programming language. This loss confers certain advantages:
opportunistic behaviors are impossible, and the algorithm can compose
preferences via an efficient dynamic programming technique. Their algorithm
thus creates a computational market simulation, rather than a
computational market; it might find a role within a market by offering
objects a low-overhead scheduling service, guided by external market
prices.
II.6. Sutherland's time-sharing system

In "A Futures Market in Computer Time" [79], I. E. Sutherland describes a bidding mechanism
(implemented in the medium of paper) that results in computer resources being
allocated according to the users' priorities. Users compete for computer time
by making bids for specific blocks of time, with the bidding currency being
tokens which are assigned to users according to their relative priority. A bid
can be pre-empted by a higher bid. Since higher priority users have more tokens
to bid with, they are able to outbid the lower priority users. Being outbid, a
user might then try for a "cheaper" block of time during a less desirable
period of the day.
By having the price of a time period vary with demand,
more efficient resource allocation is possible. There are, however,
restrictions placed on users (they cannot trade tokens or lower a bid) that
limit the flexibility of this system.
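A minimal sketch of such a bidding mechanism, with the token amounts and class structure invented for illustration (Sutherland's system was implemented on paper, not in code):

```python
# Sketch of Sutherland-style bidding for blocks of machine time: tokens
# are issued according to priority, a standing bid can be preempted
# only by a strictly higher one, and, per Sutherland's restrictions,
# tokens cannot be traded and bids cannot be lowered.

class TimeBlock:
    def __init__(self, label):
        self.label, self.holder, self.bid = label, None, 0

    def place_bid(self, user, amount):
        if amount <= self.bid or user.tokens < amount:
            return False                     # must strictly outbid, and afford it
        if self.holder is not None:
            self.holder.tokens += self.bid   # refund the preempted bidder
        user.tokens -= amount
        self.holder, self.bid = user, amount
        return True

class User:
    def __init__(self, name, tokens):
        self.name, self.tokens = name, tokens

prime_time, late_night = TimeBlock("9am"), TimeBlock("2am")
high, low = User("high_priority", tokens=100), User("low_priority", tokens=20)

prime_time.place_bid(low, 15)   # low priority claims the desirable block first
prime_time.place_bid(high, 40)  # preempted by a higher bid; low is refunded
late_night.place_bid(low, 5)    # outbid, low settles for a cheaper period
print(prime_time.holder.name, late_night.holder.name)
# -> high_priority low_priority
```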
II.7. Connectionism and genetic algorithms

Two recent uses of spontaneous order principles in software are
connectionism (also known as artificial neural systems or parallel distributed
processing models) [80] and genetic algorithms
[81]. The first draws its inspiration from
models of how neural networks may operate, the second from genetically-based
biological evolution. Both systems have shown impressive abilities to learn to
recognize patterns in noisy data. Knowledge of these patterns need not be
designed in a priori by a human designer; rather, these systems
are able to sift patterns from the data itself. Though this results in these
systems "knowing" the patterns, the patterns are nowhere explicitly
represented: the systems do not know what patterns they know.
These systems and the agoric approach share certain
similarities. All are spontaneous order systems engaging in distributed
representation and adapting to changing circumstances in part by adjusting (and
passing around) numeric weights. Some aspects of genetic algorithms are
explicitly based on a market metaphor [82], and
Barto proposes connectionist models based on networks of self-interested units
[83].
All these systems learn (in part) by increasing numeric
weights associated with components that have contributed to overall success. A
problem that needs to be addressed by such a learning algorithm is the division
of rewards when several components have together contributed to a joint
success. Minsky writes:
It is my impression that many workers in the area of
`self-organizing' systems and `random neural nets' do not feel the urgency of
this problem. Suppose that one million decisions are involved in a complex task
(such as winning a chess game). Could we assign to each decision one-millionth
of the credit for the completed task? ... For more complex problems, with
decisions in hierarchies ... and with increments small enough to assure
probable convergence, the running times would become fantastic.
Minsky wrote this in 1961 [84]. Despite the current progress of connectionism
and genetic algorithms, he still considers this criticism essentially correct
[85].
A capable learning system should be able to learn better
credit assignment mechanisms. In an agoric system, when several objects are
about to work together to produce some result, they can negotiate the division
of profits and risk. Among simple objects, and early in the evolution of an
agoric system, this negotiation might generally be handled by simple initial
strategies that may be no more flexible than the "back propagation" [80] and "bucket-brigade" [81] algorithms employed by some connectionist and
genetic-algorithm systems. As the system develops, market competition will
reward objects which employ more sophisticated negotiating strategies that
better reflect both the value derived from the various contributors and what
their competitors are offering.
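A sketch of the simplest sort of initial strategy we have in mind follows; the proportional split, the learning rule, and the contributor names are all our illustrative assumptions:

```python
# Toy credit assignment for joint work: contributors split a payment in
# proportion to negotiated shares, and each share (a numeric weight, in
# the spirit of bucket-brigade credit assignment) drifts toward the
# value the consumer attributes to that contributor.

def divide_profit(payment, shares):
    """Split a payment among contributors in proportion to their shares."""
    total = sum(shares.values())
    return {c: payment * s / total for c, s in shares.items()}

def update_shares(shares, attributed_values, rate=0.1):
    """Crude learning rule standing in for negotiation: nudge each share
    toward the value the consumer attributes to that contributor."""
    return {c: s + rate * (attributed_values[c] - s)
            for c, s in shares.items()}

shares = {"parser": 1.0, "prover": 1.0, "printer": 1.0}
print(divide_profit(9.0, shares))  # equal thirds to start
shares = update_shares(shares, {"parser": 0.5, "prover": 4.0, "printer": 0.5})
print(divide_profit(9.0, shares))  # the prover's share grows
```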
Both connectionism and genetic algorithms try to
substitute spontaneous order principles for design: individual, competing units
within such systems are not large programs designed by conventional means.
There is much to be gained from both design and evolution; the agoric systems
approach has been designed to use the strengths of both.
II.8. Summary

In summary, though the marketplace has often been used as a metaphor, it has
generally not been used as a real model: these systems are not true
computational markets. Attempts to copy patterns which have emerged in markets
entail a loss of flexibility compared with using markets themselves. This
criticism is analogous to the connectionist criticism of representationalist
cognitive models [80]: by attempting to model emergent patterns while
discarding the foundations which made them possible, representationalist
models are overly "brittle", sacrificing flexibility and learning ability.
Acknowledgments

Since 1983, when we started exploring computational markets, many people have contributed
their insights on these concepts. We thank the following for helpful
suggestions on agoric systems and these papers: Agustin Araya, Yeshayahu Artsy,
Jim Bennett, Peter Bishop, Daniel Bobrow, John Seely Brown, Andrew Cameron,
Peter Deutsch, Mike Dixon, Tom Finholt, Mike Fischer, Bob Flegal, Felix
Frayman, David Friedman, Milton Friedman, Stuart Greene, Roger Gregory, Robert
Gruber, Eric Gullicson, Ken Haase, Robin Hanson, Jed Harris, Rich Hasha, Keith
Henson, Karl Hess, Carl Hewitt, Chris Hibbert, Tad Hogg, Bernardo Huberman,
Gene Hughes, Ted Kaehler, Ken Kahn, Kirk Kelley, Scott Kim, Bill Kornfeld,
David Lindbergh, Pattie Maes, Thomas Malone, John McCarthy, Diana Merry, Marvin
Minsky, Ted Nelson, Gayle Pergamit, Alan Perlis, Chris Peterson, Harry Pyle,
Jim Rauen, Jonathan Rees, Ramana Rao, Phil Salin, Allan Schiffman, Ehud
Shapiro, Jeff Shrager, Randy Smith, Terry Stanley, Mark Stefik, Richard
Steiger, Debbie Tatar, Eric Tribble, Dave Ungar, Steve Witham, Chee Yu, and
Frank Zdybel.
For providing the ideas which inspired this work, we
thank Carl Hewitt, Marvin Minsky, Ted Nelson, Doug Lenat, Robert Axelrod,
Richard Dawkins, and most especially Friedrich Hayek.
For having arranged to make this research possible, we
thank Jonathan Schmidt, Vic Poor, Charlie Smith, Mark Stefik, the Datapoint
Technology Center, the Foresight Institute, the MIT Artificial Intelligence
Laboratory, the Stanford Artificial Intelligence Laboratory, and the Xerox Palo
Alto Research Center.
Mark S. Miller dedicates his contribution to these
papers to his uncle,
Henry I. Boreen,
who started him on the road to
intelligence.