Re: poly: The singleton hypothesis

From: Peter C. McCluskey <>
Date: Tue Apr 28 1998 - 17:39:14 PDT

("Nick Bostrom") writes:
>Peter C. McCluskey wrote:
>> If all this research happened on one planet, I might be willing to
>> believe this evolution was consistent with some unified control
>What about having the evolution take place in a computer simulation?

 That solves some of the worst problems, but I still don't see how it
could use the results to improve local parts of itself without diverging
from the collective identity.

>>, but
>> I think constraints sufficient to keep the entities controlling this
>> research in distant solar systems from diverging are probably incompatible
>> with high intelligence.
>I think blind natural evolution will be hopelessly inefficient as
>compared to evolution taking place in an environment and according to
>rules designed by a superintelligence for maximal progress. Is there
>any reason why the natural mutation rates and selection pressures
>etc. should be anywhere near optimal?

 I wasn't saying anything about rates of change, and assume many such
details about evolution will be changed.

>I'm suspicious of the idea (I don't know if it figures in the
>background of your reasoning) that as soon as a being reaches
>sufficient intelligence, it will automatically seek to become free
>and to pursue its own career. It might work that way in humans, but

 Intelligence may not imply this, but I think that improving an
intelligence in response to new data (without some way to ensure that
such data is the same for all) implies changes that are inherently
divergent.
>> A temporary singleton to get us through the singularity is a more
>> interesting question than your permanent singleton idea. I wish I
>> knew how to figure out whether it was possible or desirable.
>Hmm... I'm not sure what a temporary singleton would be. If the
>singleton chooses what should succeed it, then one might describe the
>change as one where the singleton has just changed its form. I'm

 By temporary, I meant one intended to help us survive the singularity,
without trying to plan out long-term unity.

>If and when the singleton faces competition from hostile alien
>civilizations then the equation changes. This might indeed
>reintroduce Robin's space race burning the cosmic commons, yes.

 If those other civilizations haven't constrained themselves the way
the singleton has, it may be unsafe to wait until seeing them to
optimize one's defensive powers.

>> c) the traditional threat of retaliation (which has so far worked pretty
>> well at preventing nuclear and germ warfare).
>We have been lucky so far, but as Hal pointed out, the situation
>seems to gradually become less and less stable.

 There's probably some destabilizing trend, but I don't think it is
as clear or strong as many fear. We have a decent record of minimizing
the use of things like nuclear and biochemical weapons.

>blinded by ideology. Since one nuke can kill over a million
>people, and since more than one in a million will go mad, this
>proposal would mean that we would all die.)

 Non sequitur. One nuke can kill a million people who are concentrated
in a city. Killing everyone in Alaska would probably take a lot more
than one nuke per million people. And not every mad person who had a right
to own a nuke could afford one or would use those he could afford to
maximise harm. [I'm not claiming that it's wise to allow anyone to own
a nuke, merely that you have misstated the problems with such a policy.]

Peter McCluskey          | Critmail ( | Accept nothing less to archive your mailing list
Received on Wed Apr 29 00:40:56 1998

This archive was generated by hypermail 2.1.8 : Tue Mar 07 2006 - 14:45:30 PST