Re: poly: The singleton hypothesis

From: Peter C. McCluskey <pcm@rahul.net>
Date: Mon May 04 1998 - 11:59:06 PDT

 bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>A superintelligence, on the other hand, will have some
>understanding of technology. So if it wanted to make as many copies
>of itself in the same medium as itself, then it wouldn't switch to a
>different medium -- that would be plain stupid.

 Are you implying that "some understanding" precludes mistakes, or do
you assume it could recover from any mistakes it makes?

>(BTW, could it not also be argued that if we are to ascribe goals to
>DNA molecules, the most correct goal to ascribe might not be "make as
>many similar DNA molecules as possible", but rather "make as many
>instances of the same basic DNA information pattern as possible"? In
>the past, these two goals would have led to the same actions. In the
>future, assuming the latter goal might imply different
>predictions. If these predictions are borne out, doesn't that give us
>reason to say that ascribing the latter goal was more correct?)

 I think it's more correct to say that the goals of DNA were ambiguous.

>Well, power tends to seek to perpetuate itself. For example, suppose
>that in order to survive the singularity we constructed a dictator
>singleton, i.e. we gave one man total power over the world (a very
>bad choice of singleton to be sure). Then we wouldn't necessarily
>expect that dictator to voluntarily step down when the singularity is
>over, unless he was a person of extremely high moral standards. I
>think the same might easily happen for other choices of singletons,
>unless they were specifically designed to dissolve.

 I think your response to David Brin applies here:

>When we are designing brand new volitional entities, these entities
>have not evolved via Darwinian selection, so we shouldn't suppose
>that they will place a high value on self-preservation. What has

 I can imagine a singleton that was enough like a dictator to have
a strong self-preservation instinct (although I haven't yet imagined
a believable pathway to such a dictator). I suspect I would rather
accept the problems that come with having no singleton than accept
the centralization that I think such an instinct requires.

>> >Yes, though I think the main parameter defining resilience to attack
>> >might be the volume that has been colonized, and I think the
>> >singleton and all other advanced civilizations would all be expanding
>> >at about the same rate, close to c.
>>
>> I think this depends quite strongly on your assumption that the goal
>> of remaining unified places few important constraints on a civilization's
>> abilities.
>
>It follows from the assumption that it's mainly the colonized volume
>that determines the military strength of an advanced nanopower,

 That assumption seems extraordinary enough to require some
justification. Is there something fundamentally different about advanced
nanopowers that eliminates the importance of intellectual and technological
improvements, or are you implying that in a military conflict between, say,
India and England, we would expect India's larger volume to give it the
advantage even if India's inhabitants had the technology of chimpanzees
or beavers?
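
 To be concrete about what the volume-only assumption would buy: here
is a back-of-the-envelope sketch (mine, in Python; the thousand-year
head start is an invented illustration, not a figure from Nick's
post). If every advanced civilization expands at the same fraction of
c, the volume advantage of an earlier start shrinks toward parity:

  # Sketch (mine, not Nick's): if strength scaled only with colonized
  # volume and everyone expands at nearly c, how big is the advantage
  # from a head start?

  def volume_ratio(head_start_years, elapsed_years):
      """Ratio of colonized volumes for two spherical expansion
      fronts, one started head_start_years earlier, both moving at
      the same speed.  Volume ~ (4/3)*pi*(v*t)**3, so v cancels."""
      return ((elapsed_years + head_start_years) / elapsed_years) ** 3

  for t in (10_000, 100_000, 1_000_000):
      print(f"after {t:>9,} years: ratio = {volume_ratio(1000, t):.4f}")

  # after    10,000 years: ratio = 1.3310
  # after   100,000 years: ratio = 1.0303
  # after 1,000,000 years: ratio = 1.0030

 On that assumption any head start washes out over time, which is
presumably why Nick expects rough parity among expanding powers; my
objection is to the assumption, not to the arithmetic.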

-- 
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list