Re: poly: Controlling an SI?

From: Peter C. McCluskey <pcm@rahul.net>
Date: Mon Jun 01 1998 - 17:20:54 PDT

 bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>Peter C. McCluskey writes:
>> The biggest problem with this is the "We could restrict" assumes much
>> more centralized control over research than I think possible. What stops
>> a lone hacker from using a different approach?
>
>I was thinking of a more narrow problem here: how to control a given
>superintelligence, rather than: how to prevent somebody else from
>building another SI. (The latter problem seems solvable in a
>singleton, and quite easily so in the early stages when relatively
>few groups were able to build a supercomputer on which the SI could
>run.)

 Yes, if you could convince me the singleton were likely, I would agree.

>Hmm.... I suspect that superintelligences' ability to deal with a
>complex law code will vastly exceed our own, so adding a clause saying
>that beings below a certain intelligence threshold (higher than
>human-level) don't have property rights would seem like a manageable

 Most of us could handle a much more complex legal system than the one we
have, but there are plenty of advantages to a simple legal system. A legal system
which maximizes economic efficiency will probably make most intelligences
richer than a less efficient one which maximizes their fraction of the
current wealth.

>complication. Especially if humans claimed the ownership of a
>significant fraction of cosmos, or even just the solar system; then
>it would seem worth the cost for the SIs to put in this extra clause
>in their legal code.

 Yes, trying to claim ownership of things over which you have no control
can undermine your ability to defend your property rights where they matter
most.

>Of course, the extra clause would only need to be temporarily

 I suspect you underestimate the costs of achieving agreement about a
fundamental moral change.

>maintained, if the wasteful biological humans were promptly
>exterminated and their belongings expropriated.

 One alternative that someone at the Foresight Senior Associates Gathering
suggested: just upload all those humans in such a way that they aren't
aware they've been uploaded. But that requires a bigger gap between humans
and SIs than I foresee.
 As long as biological humans don't seriously interfere with a
superintelligence's ability to achieve its goals, a small aesthetic
bias built into that superintelligence may be enough to persuade it
to coexist with lesser intelligences. Reducing the costs of coexistence
is probably easier than increasing the reliability of its moral code.

-- 
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list