Re: poly: The singleton hypothesis

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Tue Apr 28 1998 - 19:35:27 PDT

Peter C. McCluskey writes:

> >What about having the evolution take place in a computer simulation?
>
> That solves some of the worst problems, but I still don't see how it
> could use the results to improve local parts of itself without diverging
> from the collective identity.

Why not?

> >I'm suspicious of the idea (I don't know if it figures in the
> >background of your reasoning) that as soon as a being reaches
> >sufficient intelligence, it will automatically seek to become free
> >and to pursue its own career. It might work that way in humans, but
>
> Intelligence may not imply this, but I think that improving an
> intelligence in response to new data (without some way to insure that
> such data is the same for all) implies changes that are inherently
> unpredictable.

Perhaps unpredictable in some dimensions, but not in others. If a
being starts out with goal A, then the only way it could switch to a
different goal B is through outside intervention or accident. (For it
would have no reason to choose to give itself the goal B if all it
cared for was goal A. I'm disregarding such special circumstances as
when it has to pass a lie-detector test etc.) Since we are assuming
that there would be no outside intervention, and since a
superintelligence would surely be able to make the probability of
accidental goal-changing arbitrarily small, it seems to follow that
the superintelligence would stick to its goal.

It might discover better ways of thinking and operating, and it
might change all its other features, but its (meta-level) goals
(which could include allegiance to the singleton) would remain
constant.

> >Hmm... I'm not sure what a temporary singleton would be. If the
> >singleton chooses what should succeed it, then one might describe the
> >change as one where the singleton has just changed its form. I'm
>
> By temporary, I meant one intended to help us survive the singularity,
> without trying to plan out long-term unity.

Would that mean a singleton that was designed to dissolve when things
have stabilized? What would be the advantage of dissolving the
singleton? Think of it like this:

There are independent actors (e.g. humans) and nature decides the
conditions under which these actors interact. Then comes the
singleton and enables the deliberate design of the conditions under
which the actors interact. One can then go back to the natural state,
but what reason is there for thinking that nature is the best or
fairest framework for interaction? It seems to me it might be
advantageous, for example, to have the possibility of cosmic property
rights, something that doesn't seem possible without a singleton.

> >If and when the singleton faces competition from hostile alien
> >civilizations then the equation changes. This might indeed
> >reintroduce Robin's space race burning the cosmic commons, yes.
>
> If those other civilizations haven't constrained themselves the way
> the singleton has, it may be unsafe to wait until seeing them to
> optimize one's defensive powers.

Yes, though I think the main parameter defining resilience to attack
might be the volume that has been colonized, and I think the
singleton and all other advanced civilizations would all be expanding
at about the same rate, close to c.

> >blinded by ideology. Since one nuke can kill over a million
> >people, and since more than one in a million will go mad, this
> >proposal would mean that we would all die.)
>
> Non sequitur. One nuke can kill a million people who are concentrated
> in a city.

I know, but you get the idea.

_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb
Received on Wed Apr 29 01:41:34 1998