Re: poly: The singleton hypothesis

From: CurtAdams <CurtAdams@aol.com>
Date: Fri May 08 1998 - 20:59:33 PDT

In a message dated 5/8/98 8:15:55 PM, bostrom@ndirect.co.uk wrote:

>If initially we don't know how to reliably program a
>superintelligence to have exactly the goals we want it to have,

If? How could we possibly know how to program something vastly
more intelligent than us, something that has never been built before,
to function in a world totally different from ours?

>then
>we might choose to put the decision making authority with a group of
>people, perhaps a (world?) government, until we know how to transfer
>the right values to a computer. During this intermediary
>period there would be the usual possibility of intrigue and
>corruption that could endanger the newborn singleton.

And a virtual certainty that said singleton would work only for
the benefit of said group of people or government.

I think we'd have better chances with the grey goo.