carlf@alum.mit.edu (Carl Feynman) writes:
>My intuition is that given trillions of Jupiter brains working for billions
>of years, there will be no room left for technological or intellectual
>improvements: the limits of feasibility will be coincident with the limits
>of possibility. We will reach the highest possible level of development,
>and so will any aliens we run into.
This seems like a reasonable guess for brains that are unconstrained by
goals orthogonal to survival. It's combining that with the requirement
that the singleton have different goals that triggers my disbelief.
bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>Peter C. McCluskey wrote:
>
>> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>> >A superintelligence, on the other hand, will have some
>> >understanding of technology. So if it wanted to make as many copies
>> >of itself in the same medium as itself, then it wouldn't switch to a
>> >different medium -- that would be plain stupid.
>>
>> Are you implying that "some understanding" implies no mistakes, or
>> do you assume it could recover from mistakes?
>
>I think it implies that it is unlikely that it will make such a
>simple mistake as giving itself goals that it does not want to have.
It is very hard to create completely unambiguous specifications of goals.
I think that if it is possible to create this singleton before widespread
use of nanotech, it will just barely be possible. Adding the stipulation
that the creators will also spend whatever time and thought is needed
to design a perfect or near-perfect set of goals for it reduces
the credibility of your hypothesis much further.
>> I can imagine a singleton that was enough like a dictator to have
>> a strong self-preservation instinct (although I haven't yet imagined
>> a believable pathway to such a dictator). I suspect I would rather
>> accept the problems associated with the absence of a singleton than
>> accept the centralization I think is required for a self-preservation
>> instinct.
>
>Even if those problems included the extinction of intelligent life
>through a nanotechnological disaster?
Not if I knew that a decentralized approach had a 100% chance of
causing the extinction of intelligent life.
I expect that under most realistic conditions, there will be a good
deal of uncertainty regardless of whether we try to create a singleton.
It isn't obvious that trying to create a singleton will make the world
safer. I can imagine an attempt to create a singleton that almost succeeds,
but in which disputes over how to specify its goals polarize society enough
to create warfare that wouldn't otherwise happen.
bostrom@ndirect.co.uk ("Nick Bostrom") writes:
> Robin Hanson writes:
>> But that is exactly one of the strongest standard arguments usually
>> invoked for good government.
>
>What I meant was that if the traditional debate was about whether
>there should be a large government or a small government (--should
>the government own the railroad companies?), then that debate seems
>irrelevant to whether a singleton is desirable, since a singleton is
>equally compatible with both positions.
Another typical claim made by the "trust the government" advocates.
"The income tax will never be raised to 10%." "Social security numbers
will never be used as a national id system."
If neither the singleton nor its components had any self-preservation
instinct, I might be willing to believe it.
--
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list