Re: poly: The singleton hypothesis

From: Nick Bostrom <>
Date: Mon May 04 1998 - 20:29:58 PDT

Peter C. McCluskey wrote:

> ("Nick Bostrom") writes:
> >A superintelligence, on the other hand, will have some
> >understanding of technology. So if it wanted to make as many copies
> >of itself as possible in the same medium as itself, then it wouldn't
> >switch to a different medium -- that would be plain stupid.
> Are you implying that "some understanding" implies no mistakes, or
> do you assume it could recover from mistakes?

I think it implies that it is unlikely to make such a simple
mistake as giving itself goals that it does not want to have.
If there were a button that I knew would give me an irresistible
fetish for bicycles, and I did not want to have such a fetish, then
I would have to be really stupid to press that button.

> >Well, power tends to seek to perpetuate itself. For example, suppose
> >that in order to survive the singularity we constructed a dictator
> >singleton, i.e. we gave one man total power over the world (a very
> >bad choice of singleton to be sure). Then we wouldn't necessarily
> >expect that dictator to voluntarily step down when the singularity is
> >over, unless he was a person of extremely high moral standards. I
> >think the same might easily happen for other choices of singletons,
> >unless they were specifically designed to dissolve.
> I think your response to David Brin applies here:
> >When we are designing brand new volitional entities, these entities
> >have not evolved via Darwinian selection, so we shouldn't suppose
> >that they will place a high value on self-preservation. What has

No, it would not apply. In the example I gave here, the
dictator was a man, and men have evolved via Darwinian selection.

> I can imagine a singleton that was enough like a dictator to have
> a strong self-preservation instinct (although I haven't yet imagined
> a believable pathway to such a dictator). I suspect I would rather
> accept the problems associated with the absence of a singleton than
> accept the centralization I think is required for a self-preservation
> instinct.

Even if those problems included the extinction of intelligent life
through a nanotechnological disaster?

> >> >Yes, though I think the main parameter defining resilience to attack
> >> >might be the volume that has been colonized, and I think the
> >> >singleton and all other advanced civilizations would all be expanding
> >> >at about the same rate, close to c.
> >>
> >> I think this depends quite strongly on your assumption that the goal
> >> of remaining unified places few important constraints on a civilization's
> >> abilities.
> >
> >It follows from the assumption that it's mainly the colonized volume
> >that determines the military strength of an advanced nanopower,
> An assumption which seems to be extraordinary enough to require a bit
> of justification. Is there something fundamentally different about advanced
> nanopowers that eliminates the importance of intellectual and technological
> improvements?

Well, I have a hunch that advanced nanopowers will all have achieved
close to optimal technology, for the reasons that Carl mentions. But
this is not an assumption that I need for the singleton hypothesis.

Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
Received on Tue May 5 02:37:08 1998
