Re: poly: Controlling an SI?

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Fri May 22 1998 - 19:03:58 PDT

Peter C. McCluskey writes:

> >Would this method -- severely restricting its output channels -- be a
> >reliable way of ensuring long-term control over the superintelligence?
>
> The biggest problem with this is the "We could restrict" assumes much
> more centralized control over research than I think possible. What stops
> a lone hacker from using a different approach?

I was thinking of a narrower problem here: how to control a given
superintelligence, rather than how to prevent somebody else from
building another SI. (The latter problem seems solvable in a
singleton, and quite easily so in the early stages, when relatively
few groups would be able to build a supercomputer on which the SI
could run.)

> >Controlling a superintelligence by trade
> >
> >Why pay for something you can get for free? If the superintelligence
> >had the power to control all matter on earth, why would it keep such
> >irksome inefficiencies as humans and their wasteful fabrics? Only if
> >it had a special passion for humans, or respect for the standing
> >order; for we would certainly lack any instrumental value for a
> >superintelligence. The same holds for uploads, though the cost of
> >maintaining us in that form would be much smaller, and we could avoid
> >some limitations in our biological constitution that way.
>
> Multiple superintelligences are likely to use property rights
> to control scarce resources.

(If there will be multiple competing superintelligences... The
singleton scenario allows this possibility but it's also
compatible with the opposite assumption.)

> The best way to implement such rights is likely to use simple rules
> such as "whatever has the key to X owns X", e.g. disk storage space is
> likely to be owned by whatever entity knows the password to the account
> associated with that space. Trying to add rules that distinguish between
> intelligent owners and stupid ones creates much complexity, and as long
> as there is a continuum of intelligence levels the majority are unlikely
> to be confident that they would fall on the desirable side of an intelligence
> threshold if one were established.
> A superintelligence that tries to get around the basic rules of an operating
> system risks being exiled by its peers because it is untrustworthy. This
> isn't all that different from the reasoning that prevents you and me from
> stealing from mentally incompetent humans.
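
(Your basic rule is simple enough to write down in a few lines. A toy
sketch, purely illustrative -- all the names and details here are mine,
and nothing hangs on them:)

import hashlib
import secrets

class Resource:
    """A scarce resource (say, a block of disk storage) guarded by a key."""

    def __init__(self, name, key):
        self.name = name
        # Store only a hash of the key; the property rule is simply
        # "whatever can present the matching key owns the resource".
        self._key_hash = hashlib.sha256(key.encode()).hexdigest()

    def is_owner(self, presented_key):
        """The whole rule: present the key and you own it. No test of
        how intelligent (or stupid) the presenter is."""
        candidate = hashlib.sha256(presented_key.encode()).hexdigest()
        return secrets.compare_digest(candidate, self._key_hash)

disk = Resource("disk_block_42", key="correct horse battery staple")
print(disk.is_owner("correct horse battery staple"))   # True
print(disk.is_owner("wild guess"))                     # False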

Hmm.... I suspect that superintelligences' ability to deal with a
complex law code will vastly exceed our own, so adding a clause saying
that beings below a certain intelligence threshold (higher than
human-level) don't have property rights would seem like a manageable
complication. Especially if humans claimed ownership of a significant
fraction of the cosmos, or even just the solar system; then it would
seem worth the cost for the SIs to put this extra clause into their
legal code.
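
To make the point concrete -- again a toy sketch only, with a made-up
threshold figure and made-up names -- the clause amounts to one extra
predicate bolted onto the basic key rule:

from dataclasses import dataclass

# Arbitrary illustrative figure for "a certain intelligence threshold
# (higher than human-level)"; the units don't matter for the sketch.
INTELLIGENCE_THRESHOLD = 1000.0

@dataclass
class Agent:
    name: str
    intelligence: float

def may_own_property(agent):
    """The added clause: below the threshold, no property rights at all."""
    return agent.intelligence >= INTELLIGENCE_THRESHOLD

def claim_is_valid(agent, knows_key):
    # The whole legal code: the basic key rule plus the one extra clause.
    return knows_key and may_own_property(agent)

human = Agent("biological human", intelligence=1.0)
si = Agent("superintelligence", intelligence=1e9)
print(claim_is_valid(human, knows_key=True))   # False: the clause voids the claim
print(claim_is_valid(si, knows_key=True))      # True

One extra predicate, not a rewrite of the whole legal code; that is
what I mean by a manageable complication.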

Of course, the extra clause would only need to be temporarily
maintained, if the wasteful biological humans were promptly
exterminated and their belongings expropriated.

> This reasoning works best when dealing with a variety of uploaded humans
> and other digital intelligences. It probably fails if there is just one
> superintelligence dealing with biological humans.

Yes, and I tend to think that this situation, with many co-existing
superintelligences, uploads, AIs and biological humans, will only
arise if the first superintelligence to be created does not take over
the world. So even if it were a stable state, which is doubtful, how
do we get there except by first solving the problem (of how to
control an SI) in another way?

> >It is not at all reasonable to suppose that the human species, an
> >evolutionary incident, constitutes any sort of value-optimised
> >structure -- except, of course, if the values are themselves
> >determined by humans; and even then it is not unlikely that most of us
> >would opt for a gradual augmentation of Nature's work such that we
> >would end up being something other than human. Therefore, controlling
> >by trade would not work unless we were already in control either by
> >force or value selection.
>
> I don't understand this - the last sentence doesn't have any obvious
> connection to the prior one.

You are right, that paragraph is not coherent. I pasted some old
notes into my message and that passage ought to have been left out.

> >1. By creating a variety of different superintelligences and
> >observing their ethical behaviour in a simulated world, we should in
> >principle be able to select a design for a superintelligence with the
> >behavioural patterns that we wish. -- Drawbacks: the procedure might
> >take a long time, and it presupposes that we can create VR situations
> >that the SI will take to be real and that are relevantly similar to
> >the real world.
>
> The longer it takes, the more people will be tempted to bypass this
> procedure.

Yes.
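
(For concreteness: the procedure in 1 is essentially a generate-and-test
loop. A rough sketch, in which every function is a hypothetical stand-in
and the hard part -- a simulated world the SI takes to be real -- is
hidden inside one of them:)

import random

def create_si_design(seed):
    """Hypothetical stand-in for building one candidate SI design."""
    rng = random.Random(seed)
    return {"seed": seed, "disposition": rng.random()}

def observed_ethics_in_simulation(design):
    """Hypothetical stand-in for the hard part: a simulated world the
    SI takes to be real, scored for how acceptable its behaviour was."""
    return design["disposition"]          # placeholder scoring

def select_design(n_candidates, required_score):
    """Create a variety of designs, observe each, keep the first that
    behaves acceptably; this may take a long time, or never succeed."""
    for seed in range(n_candidates):
        design = create_si_design(seed)
        if observed_ethics_in_simulation(design) >= required_score:
            return design
    return None

print(select_design(n_candidates=1000, required_score=0.999))

The loop itself is trivial; the two stand-in functions carry all the
difficulty, which is rather the point of the drawbacks you note.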

> >2. If we can make a superintelligence that follows instructions, then
> >we are home and safe, since a superintelligence would understand
> >natural language. We could simply ask it to adopt the values we
> >wanted it to have. (If we're not sure how to express our values, we
> >could issue a meta-level command such as "Maximize the values that I
> >would express if I were to give the issue a lot of thought.")
>
> The only way that I can imagine it getting enough data about human
> values to figure out what would happen if I gave the issue more thought
> than I actually have would involve uploading me. If uploading is possible
> at this stage, then uploading is a better path to superintelligence than
> the others you've considered.

Maybe uploading will first be made possible *by* a superintelligence.
In that case, we need to control the SI first, before we can upload.
We need to cause the SI to upload us.

You have a good point, though. It is possible that uploading will be
feasible before superintelligence (or that we could choose to delay
superintelligence until we have mastered uploading). Then one
possible way to produce an SI that was under our control would be to
upload and augment ourselves until we ourselves become
superintelligent. That actually seems like quite an attractive option.

_____________________________________________________
Nicholas Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb