In describing the idea of market-based computation and
some of its implications, this paper has implicitly focused on relatively
isolated systems of software performing relatively conventional functions. The
following examines two broader issues: how market-based computation could
interact with existing markets for software, and how it could be relevant to
the goal of artificial intelligence.
An agoric open system would provide a computational world
in which simple objects can sell services and earn royalties for their
creators. This will provide incentives that differ from those of the present
world, leading to qualitative differences in software markets.
Perhaps the central problem we face in all of
computer science is how we are to get to the situation where we build on top of
the work of others rather than redoing so much of it in a trivially different
way.
----------------------R. W. Hamming, 1968 [49]
Consider the current software distribution marketplace.
Producers typically earn money by charging for copies of their software (and
put up with extensive illegal copying). Occasional users must pay as much for
software as intense users. Software priced for intense users is expensive
enough to discourage purchase by occasional users-even if their uses would be
of substantial value to them. Further, high purchase prices discourage many
potentially frequent users from trying the software in the first place. (Simply
lowering prices would not be more efficient if doing so lowered sellers'
revenues: with lower expected revenue, less software would be written,
including software for which there is real demand.)
Now consider trying to build and sell a simple program
which uses five sophisticated programs as components. Someone might buy it just
to gain access to one of its components. How large a license fee, then, should
the owners of those components be expected to charge the builder of this simple
program? Enough to make the new program cost at least the sum of the costs of
the five component programs. Special arrangements might be made in special
circumstances, but at the cost of having people judge and negotiate each case.
When one considers the goal of building systems from reusable software
components, with complex objects making use of one another's services [50], this tendency to sum costs becomes
pathological. The peculiar incentive structure of a charge-per-copy market may
have been a greater barrier to achieving Hamming's dream than the more obvious
technical hurdles.
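To make the arithmetic concrete, consider the following sketch (in Python, with
purely hypothetical figures); it shows how per-copy component licenses sum into
the price of a composite, while per-use royalties need not.

    # Hypothetical figures: five components, each licensed per copy.
    components = 5
    component_license = 500.00        # per-copy license fee for each component

    # Under charge-per-copy, the composite must recover all five licenses
    # in its own purchase price, however little any buyer uses it.
    per_copy_floor = components * component_license
    print(f"Minimum per-copy price: ${per_copy_floor:.2f}")   # $2500.00

    # Under charge-per-use, each component collects a small royalty only
    # when actually invoked, so the composite needs no such price floor.
    royalty_per_call = 0.01
    calls_by_light_user = 200
    light_user_total = components * royalty_per_call * calls_by_light_user
    print(f"A light user pays: ${light_user_total:.2f}")      # $10.00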
In hardware markets, it can be better to charge for the
use of a device than to sell a copy of it to the user:
Why was [the first Xerox copier] so successful? Two
things contributed to the breakthrough, McColough says. . .technical
superiority. . .and equally important, the marketing genius of the pricing
concept of selling [the use of the copier], not machines. 'One aspect without
the other wouldn't have worked,' he said. '. . .we couldn't sell the machines
outright because they would have been too expensive.'
--------------------Jacobson and Hillkirk, 1986 [51]
Agoric systems will naturally support a charge-per-use
market for software. In any market, software producers will attempt to extract
substantial charges from high-volume users. With charge per use, however, the
charges to be paid by high-volume users will no longer stand in the way of
low-volume users; as a result, they will use expensive software that they could
not afford today. At the same time, high-volume users will experience a finite
marginal price for using software, rather than buying it and paying a zero
marginal price for using it; they will cut back on some of their marginal,
low-value uses. The overall benefit of numerous low-volume users making
high-value use of the software will likely outweigh the loss associated with a
few high-volume users cutting back on their low-value uses, yielding a net
social benefit. It seems likely that some of this benefit will appear as
increased revenues to software producers, encouraging increased software
production.
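A toy model, with hypothetical numbers, illustrates the shift described above:
a positive per-use price admits the occasional user whom a lump-sum copy price
excludes, while trimming only an intense user's lowest-value uses.

    # Each user is modeled as a list of per-use values; all numbers are
    # hypothetical and chosen only to illustrate the incentives.
    occasional = [8.0, 5.0, 3.0]                 # a few high-value uses
    intense = [8.0] * 100 + [0.10] * 400         # many uses, most low-value

    def uses_made(values, price_per_use):
        # A rational user makes exactly those uses worth more than the price.
        return sum(1 for v in values if v > price_per_use)

    copy_price, use_price = 300.0, 1.0
    for name, values in (("occasional", occasional), ("intense", intense)):
        buys_copy = sum(values) > copy_price     # then marginal price is zero
        print(name,
              "| uses under charge-per-copy:", len(values) if buys_copy else 0,
              "| uses under charge-per-use:", uses_made(values, use_price))
    # occasional: 0 uses per-copy, 3 per-use; intense: 500 per-copy, 100 per-use.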
In a charge-per-copy market, users face an incentive
structure in which they pay nothing to keep using their present software, but
must pay a large lump sum if they decide to switch to a competitor. A
charge-per-use market will eliminate this artificial barrier to change,
encouraging more lively competition among software producers and better
adaptation of software to user needs.
By enabling small objects to earn royalties for their
creators, charge-per-use markets will encourage the writing, use, and reuse of
software components-to do so will finally be profitable. Substantial
improvement in programming productivity should result; these improvements will
multiply the advantages just described.
This charge-per-use scenario presents a major technical
problem: it depends on the ability to truly protect software from illicit
copying. True encapsulation would ensure this, but true encapsulation will
require a hardware foundation that blocks physical attacks on security. Two
approaches seem feasible: either keeping copies in just a few secure sites and
allowing access to their services over a network, or developing a technology
for providing users with local secure sites to which software can migrate.
In the limit of zero communication costs (in terms of
money, delay, and bandwidth limitations), the disincentive for remote
computation would vanish. More generally, lower communication costs will make
it more practical for objects located on remote machines to offer services to
objects on user machines. Remote machines can provide a hardware basis for
secure encapsulation and copy protection-they can be physically secured, in a
vault if need be. This approach to security becomes more attractive if software
can be partitioned into public-domain front-ends (which engage in
high-bandwidth interaction with a user) and proprietary back-ends (which
perform sophisticated computations), and if bandwidth requirements between
front- and back-ends can be minimized.
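The following sketch suggests the shape of such a partition. It is illustrative
only: remote_call stands in for whatever network or RPC layer would connect the
machines, and the back-end computation is a placeholder for a proprietary
algorithm.

    # A public-domain front-end handles high-bandwidth interaction locally;
    # only a compact request/result pair crosses the wire to the back-end.

    def proprietary_backend(request: dict) -> dict:
        # Computation-intensive, secret algorithm; never leaves the vault.
        score = sum(x * x for x in request["design_parameters"])
        return {"score": score, "fee": 0.05}

    def remote_call(request: dict) -> dict:
        # Stand-in for a network round trip to a physically secured machine;
        # simulated here as an in-process call.
        return proprietary_backend(request)

    class FrontEnd:
        """Public-domain editor and display code on the user's machine."""
        def __init__(self):
            self.design_parameters = []
        def edit(self, params):
            self.design_parameters = list(params)  # local, free, high-bandwidth
        def evaluate(self):
            # Only the compact design description crosses to the back-end.
            reply = remote_call({"design_parameters": self.design_parameters})
            return reply["score"], reply["fee"]

    fe = FrontEnd()
    fe.edit([1.0, 2.0, 3.0])
    print(fe.evaluate())   # (14.0, 0.05)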
One system that might lend itself to this approach is an
engineering service [13]. The user's machine
would hold software for the representation, editing, and display of hardware
designs. The back-end system-perhaps an extensive market ecosystem containing
objects of diverse functionality and ownership-would provide
computation-intensive numerical modeling of designs, heuristics-applying
objects (perhaps resembling expert systems) for suggesting and evaluating
modifications, and so forth.
Two disadvantages of separating front- and back-ends in
this way are communications cost and response time. If hardware encapsulation
can be provided on the local user's machine, however, software can migrate
there (in encrypted form) and provide services on-site. Opaque boxes are a
possible design for such secure hardware:
Imagine a box containing sensors and electronics able to
recognize an attempt to violate the box's integrity [52]. In addition, the box contains a processor,
dynamic RAM, and a battery. In this RAM is the private key of the
manufacturer's public-key encryption key pair [29]; objects encrypted with the public key can
migrate to the box and be decrypted internally. If the box detects an attempt
to violate its physical integrity, it wipes the dynamic RAM (physically
destructive processes are acceptable), deleting the private key and all other
sensitive data. All disk storage is outside the box (fast-enough disk erasure
would be too violent), so software and other data must be encrypted when
written and decrypted when read. The box is termed opaque because no one can
see its contents.
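A minimal sketch of the migration step follows, written with the Python
cryptography package and a standard hybrid scheme: a fresh symmetric key
encrypts the software, and the box's public key wraps that symmetric key. The
text above specifies only public-key encryption [29]; the particular algorithms
here are illustrative assumptions.

    # Distributor side: encrypt a software object for one particular box.
    # Box side: unwrap with the private key held only in internal RAM.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Manufacturer's key pair; the private key exists only inside the box.
    box_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    box_public = box_private.public_key()
    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def migrate(software: bytes):
        """Encrypt software so that only the target box can run it."""
        session_key = Fernet.generate_key()
        ciphertext = Fernet(session_key).encrypt(software)
        wrapped_key = box_public.encrypt(session_key, OAEP)
        return wrapped_key, ciphertext    # safe to keep on external disk

    def run_inside_box(wrapped_key: bytes, ciphertext: bytes) -> bytes:
        """Decrypt in internal RAM; the plaintext never leaves the box."""
        session_key = box_private.decrypt(wrapped_key, OAEP)
        return Fernet(session_key).decrypt(ciphertext)

    wrapped, blob = migrate(b"proprietary object code")
    assert run_inside_box(wrapped, blob) == b"proprietary object code"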
Internally, the opaque box would require encapsulation
among software objects. This can be done by using a secure operating system [VI], by using capability hardware [53,54,55], or by demanding that objects be written in a
secure programming language and either run under a secure interpreter or
compiled by a secure compiler [IV,56]. Among other objects, the box would contain one
or more branches of external banks, linked to them from time to time by
encrypted communications; these banks would handle royalty payments for use of
software.
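The royalty bookkeeping might take a form like the following sketch. The names,
fee amounts, and units are invented for illustration, and the encrypted
settlement with the external bank is elided.

    # A resident bank-branch object accrues per-use fees (integer
    # millicents, to keep the arithmetic exact) for later settlement.
    class BankBranch:
        def __init__(self):
            self.balances = {}                   # creator -> accrued millicents
        def credit(self, creator, amount):
            self.balances[creator] = self.balances.get(creator, 0) + amount
        def settle(self):
            # Would transmit balances to the external bank over an
            # encrypted link, then zero them locally.
            owed, self.balances = self.balances, {}
            return owed

    class MeteredObject:
        """A proprietary object that collects a royalty on each invocation."""
        def __init__(self, creator, fee, function, bank):
            self.creator, self.fee = creator, fee
            self.function, self.bank = function, bank
        def invoke(self, *args):
            self.bank.credit(self.creator, self.fee)  # royalty before service
            return self.function(*args)

    bank = BankBranch()
    service = MeteredObject("creator-A", 2, lambda x: x ** 0.5, bank)
    for v in (4.0, 9.0, 16.0):
        service.invoke(v)
    print(bank.settle())                         # {'creator-A': 6}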
Will greater hardware cost make opaque boxes
uncompetitive for personal computer systems? If the added cost is no more than
some hundreds of dollars, the benefit of greater software availability will,
for many users, far outweigh it. Opaque boxes can support a charge-per-use market in
which copies of software are available for the cost of telecommunications.
CD-ROMs full of encrypted software might be sold at a token cost to encourage
use.
An intermediate approach becomes attractive if opaque
boxes are too expensive for use as personal machines. Applications could be
split into front- and back-ends as above, but back-ends could run on any
available opaque box. These boxes could be located wherever there is sufficient
demand, and linked to personal machines via high-bandwidth local networks.
People (or software) would find investment in opaque boxes profitable, since
their processors would earn revenue. With high enough box-manufacturing costs,
this approach merges into the remote-machine scenario; with low enough costs,
it merges into the personal-machine scenario.
As society embodies more and more of its knowledge and
capabilities in software, the theft of this software becomes a growing danger.
An environment that encourages the creation of large, capable, stand-alone
applications sold on a charge-per-copy basis magnifies this problem,
particularly when the stolen software will be used in places beyond the reach
of copyright law.
A charge-per-use environment will reduce this problem. It
will encourage the development of software systems that are composites of many
proprietary packages, each having its security guarded by its creator. Further,
it will encourage the creation of systems that are distributed over many
machines. The division and distribution of functions will make the problem
faced by a thief less like that of stealing a car and more like that of
stealing a railroad. Traditional methods of limiting theft (such as military
classification) slow progress and inhibit use; computational markets promise to
discourage theft while speeding progress and facilitating use.
It has been shown how an agoric system would use price
mechanisms to allocate use of hardware resources among objects. This price
information will also support improved decisions regarding hardware purchase:
if the market price of a resource inside the system is consistently above the
price of purchasing more of the resource on the external market, then
incremental expansion is advantageous. Indeed, one can envision scenarios in
which software objects recognize a need for new hardware, lease room for it,
and buy it as an investment.
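Stated as a rule of thumb, the comparison is between a resource's average
internal market price and the amortized cost of adding more of it externally,
as in this sketch (all prices hypothetical):

    # Expand hardware when the internal price of a resource persistently
    # exceeds the amortized external cost of buying more of it.
    def should_expand(internal_prices, external_unit_cost, amortization_periods):
        per_period_cost = external_unit_cost / amortization_periods
        average_internal = sum(internal_prices) / len(internal_prices)
        return average_internal > per_period_cost

    # E.g., memory renting internally at about 1.2 units per period, while
    # a new module costs 30 units amortized over 36 periods (~0.83/period).
    print(should_expand([1.1, 1.3, 1.2], 30.0, 36))   # True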
It has been shown how objects in an agoric system would
serve human needs, with human minds judging their success. Similarly, when
objects are competent to judge success, they can hire humans to serve their
needs-for example, to solve a problem requiring human knowledge or insight.
Conway's law states that "Organizations which design
systems are constrained to produce systems which are copies of the
communications structures of these organizations" (from [57] as quoted in [58]). If so, then software systems developed in a
distributed fashion can be expected to resemble the organization of society as
a whole. In a decentralized society coordinated by market mechanisms, agoric
systems are a natural result.
Artificial intelligence is unnecessary for building an
agoric open system and achieving the benefits described here. Building such a
system may, however, speed progress in artificial intelligence. Feigenbaum's
statement, "In the knowledge lies the power", points out that intelligence is
knowledge-intensive; the "knowledge acquisition bottleneck" is recognized as a
major hindrance to AI. Stefik has observed [VII] that this knowledge is distributed across
society; he calls for a "knowledge medium" in which knowledge contributed by
many people could be combined to achieve greater overall intelligence.
Agoric systems should form an attractive knowledge
medium. In a large, evolving system, where the participants have great but
dispersed knowledge, an important principle is: "In the incentive structure
lies the power". In particular, the incentives of a distributed, charge-per-use
market can widen the knowledge engineering bottleneck by encouraging people to
create chunks of knowledge and knowledge-based systems that work together.
Approaches based on directly buying and selling knowledge
[VII,23] suffer
from the peculiar incentives of a charge-per-copy market. This problem can be
avoided by embodying knowledge in objects which sell knowledge-based services,
not knowledge itself. In this way, a given piece of knowledge can be kept
proprietary for a time, enabling producers to charge users fees that approach
the value the users place on it. This provides an incentive for people to make
the knowledge available. But in the long run, the knowledge will spread and
competition will drive down the price of the related knowledge-based
services-approaching the computational cost of providing them.
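The distinction can be made concrete with a sketch: here the knowledge is a
private attribute of an object that sells only its conclusions, one query at a
time. The rules and fee are invented for illustration.

    # Selling a knowledge-based service rather than the knowledge itself:
    # callers receive conclusions and pay per query; the rules stay private.
    class DiagnosticService:
        def __init__(self):
            self._rules = {("fever", "rash"): "measles?",    # proprietary
                           ("cough", "fever"): "flu?"}
            self.fee = 1                                     # charged per query
            self.revenue = 0
        def diagnose(self, symptoms):
            self.revenue += self.fee    # the service is sold; the rules are not
            return self._rules.get(tuple(sorted(symptoms)), "unknown")

    svc = DiagnosticService()
    print(svc.diagnose(["rash", "fever"]))   # measles?
    print(svc.revenue)                       # 1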
Agoric open systems can encourage the development of
intelligent objects, but there is also a sense in which the systems themselves
will become intelligent. Seeing this entails distinguishing between the idea of
intelligence and the ideas of individuality, consciousness, and will.
Consider the analogous case of human society.
It can be argued that the most intelligent system now
known is human society as a whole. This assertion strikes some people as
obvious, but others have a strong feeling that society should be considered
less intelligent than an individual person. What might be responsible for these
conflicting views?
The argument for the stupidity of society often focuses
not on the achievements of society, but on its suboptimal structure or its slow
rate of structural change. This seems unfair. Human brains are presumably
suboptimal, and their basic structure has changed at a glacial pace over the
broad time spans of biological evolution, yet no one argues that society is
worse-structured than a brain (what would this mean?), or that its basic
structure changes more slowly than that of a brain. Great intelligence need not
imply optimal structure, and suboptimal structure does not imply stupidity.
Other arguments for the stupidity of society focus on the
behavior of committees, or crowds, or electorates. This also seems unfair.
Human beings include not only brains but intestines; our intelligence is not to
be judged by the behavior of the latter. Not all parts need be intelligent for
a system to be so. Yet other arguments focus on things individuals can do that
groups cannot, but one might as well argue that Newton was stupid because he
did not speak Urdu. A final argument for the stupidity of society focuses on
problems that result when a few individuals who are thought to somehow
represent society attempt to direct the actions of the vast number of
individuals who actually compose society-that is, the problems of
central planning, government, and bureaucracy. This statement of the argument
seems an adequate refutation of it.
The argument for society's intelligence is simple: people
of diverse knowledge and skills, given overall guidance by the incentives of a
market system, can accomplish a range of goals which, if accomplished by an
individual, would make that individual a super-human supergenius. The computer
industry is a small part of society, yet what individual could equal its
accomplishments, or the breadth and speed of its ongoing problem-solving
ability?
Still, it is legitimate to ask what it means to speak of
the "intelligence" of a diverse, distributed system. In considering an
individual, one commonly identifies intelligence with the ability to achieve a
wide range of goals through complex information processing. But in agoric
systems, as in human society, the component entities will in general have
diverse goals, and the system as a whole will typically have no goals [59]. Nonetheless, a similar concept of intelligence
can be applied to individuals, societies, and computational markets.
Individuals taking intelligence tests are judged by their
ability to achieve goals set by a test-giver using time provided for the
purpose. Likewise, the intelligence of a society may be judged by its ability
to achieve goals set by individuals, using resources provided for the purpose.
In either case, the nature and degree of intelligence may be identified with a
combination of the range of goals that can be achieved, the speed with
which they can be achieved, and the efficiency of the means employed. By this
measure, one may associate kinds and degrees of intelligence not only with
individuals, but with corporations, with ad-hoc collections of suppliers
and subcontractors, and with the markets and institutions that bring such
collections together at need. The idea of intelligence may thus be separated
from the ideas of individuality, consciousness, and will.
The notion of intelligence emerging from social
interactions is familiar in artificial intelligence: Minsky [60] uses the society metaphor in his recent work on
thinking and the mind; Kornfeld and Hewitt [61]
use the scientific community as a model for programs incorporating due process
reasoning. Human societies demonstrate how distributed pieces of knowledge and
competence can be integrated into larger, more comprehensive wholes; this
process has been a major subject of study in economics [8]
and sociology [63]. Because these social
processes (unlike those in the brain) involve the sometimes-intelligible
interaction of visible, macroscopic entities, they lend themselves to study and
imitation. This paper may thus be seen as proposing a form of multi-agent,
societal approach to artificial intelligence.