Re: poly: The singleton hypothesis

From: Peter C. McCluskey <pcm@rahul.net>
Date: Mon Apr 27 1998 - 11:26:38 PDT

 bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>In the singleton scenario, evolution would not be abandoned; it would
>be internalized. Instead of natural selection there would be
>deliberate design of the fitness function and the selection
>mechanisms. It would be more like a researcher playing with genetic
>algorithms. (It would even be possible to retain a considerable
>extent of "natural" selection within the singleton.)

 If all this research happened on one planet, I might be willing to
believe this evolution was consistent with some unified control, but
I think constraints sufficient to keep the entities controlling this
research in distant solar systems from diverging are probably incompatible
with high intelligence.

>I think that for the foreseeable future the risk that we will
>annihilate ourselves is much greater than that we will be annihilated
>by aliens.

 A temporary singleton to get us through the singularity is a more
interesting question than your permanent singleton idea. I wish I
knew how to figure out whether it was possible or desirable.

> As regards long-term strategies, they will in many
>respects be similar to the best we could hope for without the
>singleton: e.g. the singleton would want to expand in all directions
>at near the max feasible speed.

 The more its strategy deviated from the strategies that independent
entities would use, the less evolutionary stability it would have, so I
doubt that it would have a big long-term effect.

 bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>> Then of course nanotech will raise the stakes even farther, allowing
>> more destructive power and potentially easier design mechanisms.
>
>These are simple but very important insights. (I have yet to hear a
>sensible extropian proposal for how to deal with this situation.)

 a) make frequent backups of yourself, and transmit them to archives
  far enough away that a single disaster won't pose much risk.
 b) be able to run away at the speed of light.
 c) rely on the traditional threat of retaliation (which has so far worked
  pretty well at preventing nuclear and germ warfare).

 hanson@econ.berkeley.edu (Robin Hanson) writes:
>I realize this claim seems like a consensus among certain groups,

 I see a remarkable lack of consensus. If you could quantify people's
predictions for the time between the first nanotech product and the point
when nanotech has caused economic growth of 2 or 3 orders of magnitude,
I think you'd see a rather flat, uniform distribution ranging from hours
to centuries, where there ought to be something like a bell curve. I think
the illusion of a group of singularity worshipers all thinking the same
thing comes from imprecise communication.

>but I still find it incredible, and share Curt's skepticism. If
>tomorrow Monsanto found it had a workable simple assembler (which
>took 20 types of amino acids as inputs and could assemble any 3D array
>of such amino acids up to 1mm^3), I don't think they could build a super
>intelligence nor take over the world in a year. Nor could
>they likely keep the design secret for very long.

 Most of the pathways that I currently think are likely would not
keep the design secret for long enough to conquer the world, but I
don't see how anyone can be confident about what would happen if a
military organization developed the first general purpose assembler.

>Given how close we are and have come to world government, however,
>I do take seriously the possibility that all our solar system will
>be under a single government.

 I don't see many signs that we are close to a world government, nor
do I see much of a trend (Alexander came almost as close as Hitler did).
Things like the UN only seem to work when there happens to be a near-
consensus.

 CurtAdams@aol.com (CurtAdams) writes:
>>I realize this claim seems like a consensus among certain groups,
>>but I still find it incredible, and share Curt's skepticism. If
>>tomorrow Monsanto found it had a workable simple assembler (which
>>took 20 types of amino acids as inputs and could assemble any 3D array
>>of such amino acids up to 1mm^3), I don't think they could build a super
>>intelligence nor take over the world in a year.
>
>In effect, Monsanto already has an equivalent to that assembler -
>ribosomes. And, indeed, that hasn't allowed them to make super
>intelligence or take over the world.

 Nitpick: assembling arbitrary sequences of amino acids is different
from positioning them anywhere you want.

-- 
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list