Re: poly: The singleton hypothesis

From: Nick Bostrom <>
Date: Sun May 03 1998 - 08:09:02 PDT

Robin Hanson wrote:

> The more totalitarian a singleton is likely to be, the harder people
> would fight to prevent its appearance. You might be able to convince
> people to fear the dangers of nanotech, nukes, or whatever enough to
> support some form of limited world government. But you're talking
> about a power so paranoid totalitarian that it would isolate all humans,
> uploads, and any creature without simple loyal goals, in prison/work camps,
> without any chance of access to important things, for fear they might
> someday revolt. I can't imagine a less attractive option to advocate.
> If they thought it a likely outcome, just about any likely dominant
> human power would work very hard to avoid it. So I find it hard to
> see how a human-run nanotech transition could create such a singleton
> outside the most disastrous unforeseen accident.

I think we are talking about several issues here. One question is: does
it seem plausible that future technologies will make possible the
construction of a singleton that is "eternally" stable? To this
question I answer yes. When arguing for this position, it is OK to
refer to an evil dictator-singleton that had little chance of being
overthrown.

Another, more interesting question is: Will a singleton actually
form? I hypothesized that, given that we manage to avoid nanotech
doom, a singleton is a likely outcome. A singleton (at least a
temporary one) might even be the only feasible way of avoiding a
disaster. If so, then people ought to be willing to pay even a large
cost in terms of (temporarily) reduced efficiency in order to
increase the chances of avoiding extinction. Whether enough people
would be rational enough to understand this is an open question, but
if it is indeed rational then I would think there is a real chance of
building up enough support for the idea among the people who will
have control over the first assemblers.

But is it true that a singleton necessarily comes at a great cost in
terms of efficiency? That is not obvious to me. I think it is
possible that the singleton may *increase* efficiency rather than
decrease it. For example, by avoiding a cosmic colonization race
that would burn up most of the universe's resources, we would seem
to increase our efficiency a lot.

Another cost that we need to consider is the cost for those people
who found the singleton. In one example I gave (a lower bound -- I
think we can design much better singletons than that), the founding
humans would give up their direct access to the physical world, at
least temporarily. This might seem a large price to pay, but if the
alternative is obliteration, they might be willing to pay it. And as
compensation, they would get to share (together with their
children and other creations) all the resources that the singleton
will acquire as it colonizes space. The founders could program the
singleton to use all surplus to construct computers that
would provide an increasing Lebensraum for the uploads and AIs. It
doesn't appear accurate to call this a prison/work camp.

We can take the proposal a step further and begin to think about how
we can include some direct access to the physical world for the
singleton-members. I think that is possible. One thing we could do
would be to have robot proxies that could operate in the outside
world and that would be controlled by the virtual creatures through
democratic decisions. That appears quite safe. If we want to give the
inhabitants a chance to pursue their own projects in physical
reality, we could have a system where they would propose their
projects to a review board of superintelligences, which would
approve each project it judged safe enough.

I think that by using our creativity we can find ways around most of
the singleton's apparent weaknesses.

Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
Received on Sun May 3 14:16:46 1998
