Re: poly: The singleton hypothesis

From: Nick Bostrom <>
Date: Sat May 09 1998 - 16:47:30 PDT

CurtAdams wrote:

> >If initially we don't know how to reliably program a
> >superintelligence to have exactly the goals we want it to have,
> If? How could we possibly know how to program something vastly
> more intelligent than us that has never been built to function
> in a world totally different from ours?

What do you mean by "that has never been built to function
in a world totally different from ours"? I think it is definitely
possible to build an artificial intellect that is smarter than
ourselves, and whether we will be able to give it the values we
choose is an open question. Do you think the two-step procedure I
outlined can't possibly work?

> >then
> >we might choose to put the decision making authority with a group of
> >people, perhaps a (world?) government, until we know how to transfer
> >the right values to a computer. During this intermediary
> >period there would be the usual possibility of intrigue and
> >corruption that could endanger the newborn singleton.
> And a virtual certainty that said singleton would work only for
> the benefit of said group of people or government.

So we had better make that group inclusive enough that it contains
ourselves and everybody we care for or think are ethically justified
to have a say. We had also better have institutions, procedures, checks
and balances that enable the public to have some degree of confidence
that the actual computer programmers and their bosses don't give the
singleton other values or functions than the ones that society has
decided it should have.

> I think we'd have better chances with the grey goo.

Yes, grey goo is the easy problem. The difficulty is avoiding
deliberately designed black goo.
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics