Re: poly: The singleton hypothesis

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Mon Apr 27 1998 - 18:07:18 PDT

Peter C. McCluskey wrote:

> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
> >In the singleton scenario, evolution would not be abandoned; it would
> >be internalized. Instead of natural selection there would be
> >deliberate design of the fitness function and the selection
> >mechanisms. It would be more like a researcher playing with genetic
> >algorithms. (It would even be possible to retain a considerable
> >extent of "natural" selection within the singleton.)
>
> If all this research happened on one planet, I might be willing to
> believe this evolution was consistent with some unified control

What about having the evolution take place in a computer simulation?

>, but
> I think constraints sufficient to keep the entities controlling this
> research in distant solar systems from diverging are probably incompatible
> with high intelligence.

I think blind natural evolution will be hopelessly inefficient
compared to evolution taking place in an environment, and according
to rules, designed by a superintelligence for maximal progress. Is
there any reason why natural mutation rates, selection pressures
etc. should be anywhere near optimal?
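
To make the "researcher playing with genetic algorithms" picture
concrete, here is a minimal sketch (in Python; all parameter values
are purely illustrative, not claimed to be optimal) in which the
mutation rate, the strength of selection and the fitness function
itself are explicit knobs set by the experimenter rather than by
blind nature:

    # Toy genetic algorithm: fitness function, mutation rate and
    # selection pressure are all deliberate design choices.
    # (Parameter values below are illustrative only.)
    import random

    MUTATION_RATE = 0.01      # per-bit mutation probability (designer's choice)
    TOURNAMENT_SIZE = 5       # larger tournaments = stronger selection pressure
    GENOME_LENGTH = 64
    POPULATION_SIZE = 100
    GENERATIONS = 200

    def fitness(genome):
        # Deliberately designed objective; here simply "count the ones".
        return sum(genome)

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    def select(population):
        # Tournament selection: pick the fittest of a random sample.
        contestants = random.sample(population, TOURNAMENT_SIZE)
        return max(contestants, key=fitness)

    def crossover(a, b):
        point = random.randrange(1, GENOME_LENGTH)
        return a[:point] + b[point:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]

    for gen in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POPULATION_SIZE)]

    print("best fitness:", max(fitness(g) for g in population))

The point is only that every one of these parameters is a design
decision; blind natural selection has no one pulling the levers.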

I'm suspicious of the idea (I don't know if it figures in the
background of your reasoning) that as soon as a being reaches
sufficient intelligence, it will automatically seek to become free
and to pursue its own career. It might work that way in humans, but
I think that's just an effect of our idiosyncratic psychology. I see
no reason why you couldn't program an AI to combine high intelligence
with any set of values whatever (with some obvious exceptions).

> >I think that for the foreseeable future the risk that we will
> >annihilate ourselves is much greater than that we will be annihilated
> >by aliens.
>
> A temporary singleton to get us through the singularity is a more
> interesting question than your permanent singleton idea. I wish I
> knew how to figure out whether it was possible or desirable.

Hmm... I'm not sure what a temporary singleton would be. If the
singleton chooses what should succeed it, then one might describe the
change as one where the singleton has just changed its form. I'm
still a bit vague about this, but I have in mind the idea that
individuation in the future might depend more on goals and values
than on anything else (compare Chislenko's "goal threads").

> > As regards long-term strategies, they will in many
> >respects be similar to the best we could hope for without the
> >singleton: e.g. the singleton would want to expand in all directions
> >at near the max feasible speed.
>
> The more its strategy tried to deviate from the strategies that independent
> entities would use, the less evolutionary stability it has, so I doubt
> that it would have a big long-term effect.

If and when the singleton faces competition from hostile alien
civilizations then the equation changes. This might indeed
reintroduce Robin's space race burning the cosmic commons, yes.

> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
> >> Then of course nanotech will raise the stakes even farther, allowing
> >> more destructive power and potentially easier design mechanisms.
> >
> >These are simple but very important insights. (I have yet to hear a
> >sensible extropian proposal for how to deal with this situation.
>
> a) make frequent backups of yourself, and transmit them to archives
> far enough away that a single disaster won't pose much risk.
> b) be able to run away at the speed of light.

These two options are presumably only available *after* the
singularity, when uploading and nanotechnology have been
successfully tamed.

> c) the traditional threat of retaliation (which has so far worked pretty
> well at preventing nuclear and germ warfare.)

We have been lucky so far, but as Hal pointed out, the situation
seems to gradually become less and less stable.

(Also, I know some extropians who even advocate the right for
everybody to have his or her own nuke! I think that is to be
blinded by ideology. Since one nuke can kill over a million
people, and since more than one in a million will go mad, this
proposal would mean that we would all die.)
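
(As a rough back-of-the-envelope check, assuming a world population
of about six billion: one person in a million going mad gives on the
order of 6,000 mad nuke-owners, and at roughly a million deaths per
nuke that comes to about six billion deaths, i.e. essentially
everybody.)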

_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb
