poly: The singleton hypothesis

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Sat Apr 25 1998 - 20:04:43 PDT

I've been thinking about a hypothesis that I think is both
interesting and quite plausible. I call it the singleton hypothesis.
In set theory, a singleton is a set that has exactly one member. That
member may itself be a set that has many members. The singleton
hypothesis is that in the future there will only be one absolutely
sovereign power. This power, the singleton, may or may not have an
internal structure. It could, for example, contain independent
societies or persons that pursue their own self-interest; but these
members of the singleton will only exist because the singleton allows
them to. They may enjoy virtually unlimited freedom, but there are
certain things they can't do, such as destroy the singleton, or
destroy each other unless the singleton has deliberately given them
the power to do so. The singleton could be heaven or hell or neutral,
depending on its inner structure and what its values are; the
singleton hypothesis leaves that open.

Let me outline why I think the singleton hypothesis has
a good chance of being true (provided we manage to avoid the
doomsday scenario).

As we get closer to nanotech and superintelligence (either will
quickly lead to the other), a race will begin to get there first. The
winner of this race, the "leading force" in Drexler's terms, will
obtain total power, because at a singularity-like pace of
development even a lead of a mere year or so will mean the difference
between having all the capabilities that nanotechnology cum
superintelligence will give and lacking those capabilities. By total
power, I mean the ability to impose its decisions on other powers at
negligible cost to itself and probably with little shed of enemy blood.
The leading force might, for instance, be the US government
(military), or a coalition of democratic nations, or (less probable?)
a multinational corporation.

In this situation, assuming the leading force is rational (the
probability of this being the case is increased by its having access
to superintelligence), we can predict how it will behave by
considering the costs and benefits of the alternative actions it can
take. For my purposes, the relevant dichotomy is: (A) act so as to
prevent competitors from removing the total power of the leading
force; or (not-A) not act in such a way.

It seems clear to me that for almost all practical motives that the
leading force might have, and for many ethical motives, the expected
utility is higher for A than for not-A. Let's just take one case as
an example: the leading force might want progress and development to
continue at a high speed, but might worry that if there is just one
power, then the monopoly situation might make it complacent and thus
slow development. But even assuming that such a human psychological
affliction as complacency applies to an autopotent superintelligence,
the leading force can achieve as much fierce competition and
tooth-and-claw evolution as it likes, even without giving up its
total power. It simply recreates its competitors as internal
constructs. Perhaps it runs them as simulations; perhaps it gives
them control of a limited region of spacetime. Not only does this
have all the advantages of a "real" evolutionary battle, it adds some
benefits that the real thing cannot provide. For instance, the
leading force can run the experiment with varying initial conditions
and can adjust parameters so as to maximize the efficiency of the
process.

Given that the expected utility of A is greater than that of not-A,
the leading force will choose A. Thus it will retain total power and
be a singleton.
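
The expected-utility step above can be sketched numerically. The
probabilities and utilities below are purely illustrative placeholders
(nothing in the argument fixes their values); the point is only that A
comes out ahead whenever losing total power is costly and choosing A
does not forfeit the benefits of competition, which can be recreated
internally:

```python
# Toy expected-utility comparison for the leading force's choice
# between A (prevent competitors from removing its total power)
# and not-A. All numbers are hypothetical; only the ordering matters.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over (p, u) pairs."""
    return sum(p * u for p, u in outcomes)

# (A): it retains power, and since competition can be recreated as
# internal constructs, little of competition's value is lost.
eu_A = expected_utility([
    (0.95, 100),  # retains power; simulated competition drives progress
    (0.05, 0),    # prevention fails despite the attempt
])

# (not-A): some chance a rival displaces it, after which its own
# values no longer steer the outcome.
eu_not_A = expected_utility([
    (0.60, 100),  # no rival ever emerges
    (0.40, -50),  # a rival removes its total power
])

print(eu_A, eu_not_A)
```

Under these (and most other) assignments, eu_A exceeds eu_not_A, which
is all the argument needs: a rational leading force maximizing
expected utility picks A.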

One consequence, if the singleton hypothesis is correct, is that
Robin's game-theoretic analysis of space colonization won't obtain.
Game theory is only relevant if there is more than one player. Even
under the singleton hypothesis, game theory could still be useful,
since the singleton might contain independent players. But Robin's
analysis presupposes that property rights cannot be generally
enforced in space, whereas a singleton could enforce property rights,
and it almost certainly would, if the alternative were an extremely
wasteful race burning up the cosmic commons.

How could property rights be enforced over the immense distances in
intergalactic space? One solution would be to preprogram respect for
property rights into all probes. These probes would be constructed so
as to make a mutation exceedingly improbable. Since the singleton
would be the creator of all probes (or could easily retrieve any
low-tech probes that had been sent out before it became the
singleton), all probes would be preprogrammed to act in a way that
minimizes global waste and inefficiency.

Minimizing waste would be one reason for positively trying to create
a singleton (another could be safety). The important task would then
be to see to it that the singleton was one that corresponded to our
values. For many of us transhumanists, this means that we want it to
respect individual freedom and self-ownership. We would want it to
guarantee our continued personal existence (for as long as we want
to live), and we would want it to give us effective control over the
new resources that space colonization makes available. With these
resources we could do whatever we want, except things that we had
decided at the start that the singleton should prevent us from doing
-- such as massive genocide, perhaps, and avoidable globally wasteful
acts (such as a race burning up the cosmic commons).

_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb
Received on Sun Apr 26 02:11:24 1998
