Re: poly: Controlling an SI?

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Mon Jun 01 1998 - 20:28:20 PDT

We are discussing whether, in a society where there are several free
AIs and superintelligences around, the superintelligences would need
to have human-friendly values in order for the humans to survive, or
whether pragmatic reasons alone would suffice to motivate the
superintelligences to preserve us.

Peter C. McCluskey wrote:

> >Hmm.... I suspect that superintelligences' ability to deal with a
> >complex law code will vastly exceed our own, so adding a clause saying
> >that beings below a certain intelligence threshold (higher than
> >human-level) don't have property rights would seem like a manageable
>
> Most of us can handle a much more complex legal system than we do. There
> are plenty of advantages to having a simple legal system. A legal system
> which maximizes economic efficiency will probably make most intelligences
> richer than a less efficient one which maximizes their fraction of the
> current wealth.

Why do you think that a legal system that included the extra clause
(saying that human-level intelligences don't have property rights) would
be less economically efficient than one which didn't? Consider:
(1) Once the humans are annihilated, nobody needs to think about the
law any longer.
(2) A line has to be drawn somewhere. Why is it inefficient to draw
it between humans and superintelligences, but not between, say,
slightly uplifted animals and humans?
(3) Maybe superintelligences can manage their capital more
efficiently than somewhat irrational humans.

> >complication. Especially if humans claimed the ownership of a
> >significant fraction of cosmos, or even just the solar system; then
> >it would seem worth the cost for the SIs to put in this extra clause
> >in their legal code.
>
> Yes, trying to claim ownership of things over which you have no control
> can undermine your ability to defend your property rights where they matter
> most.

And what if humans are effectively powerless and don't have
direct "control" over anything?

> >Of course, the extra clause would only need to be temporarily
>
> I suspect you underestimate the costs of achieving agreement about a
> fundamental moral change.

Maybe. What does this cost consist in, and why do you think that it
will be big?

> >maintained, if the wasteful biological humans were promptly
> >exterminated and their belongings expropriated.
>
> One alternative that someone at the Foresight Senior Associates Gathering
> suggested: just upload all those humans in such a way that they aren't
> aware they've been uploaded. But that requires a bigger gap between humans
> and SIs than I foresee.

I think it might well be technologically possible, but it would still
imply coercion, since at least some of the original humans would
presumably have preferred to live in physical reality. Still, for us
humans at least, it seems like a much better outcome than
annihilation. If the singleton cares about human preferences (which
we will hopefully be able to make it do), then it would presumably
rather upload us than annihilate us. However, why stop there? Why
would the singleton not make an even greater effort to satisfy human
preferences by allowing those who so wish to continue to live in
physical reality (under some surveillance so they don't endanger the
singleton)? It would cost somewhat more, but might still amount to a
very small fraction of the singleton's capital once space
colonization gets off the ground. Given that it is ethical in this
sense at all, chances are its degree of "ethicality", as measured
by the fraction f of its capital that it devotes to humans, won't be
in the small interval given by

U <= f * (total capital) <= PR,

where U is the cost of keeping all humans uploaded, and
PR is the cost of also keeping those humans who so wish in physical
reality.
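
To see how narrow that interval is likely to be, here is a rough
back-of-the-envelope sketch in Python. Every figure in it is invented
for illustration (the singleton's capital and the costs U and PR are
pure assumptions, not estimates); the point is only that if even PR is
a minute fraction of total capital, the band of f values lying between
U and PR is vanishingly thin, so an ethical singleton's f would most
likely overshoot it.

    # Illustrative sketch only: every number below is an arbitrary assumption,
    # chosen to show how thin the interval [U, PR] is relative to total capital.
    total_capital = 1e12   # assumed post-colonization capital (arbitrary units)
    U = 1e3                # assumed cost of keeping all humans uploaded
    PR = 1e6               # assumed cost of also keeping willing humans in physical reality

    lower = U / total_capital    # smallest f that counts as "ethical at all"
    upper = PR / total_capital   # smallest f that also covers the physical-reality option

    print(f"f must avoid only [{lower:.0e}, {upper:.0e}], width {upper - lower:.0e}")
    # With these assumptions the band is about one millionth of the capital wide;
    # any singleton devoting a non-trivial fraction of its capital to human
    # preferences lands above PR and so could afford the physical-reality option.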

> As long as biological humans don't seriously interfere with a
> superintelligence's ability to achieve its goals, a small aesthetic

(or ethical, or sentimental--)

> bias built into that superintelligence may be enough to persuade it
> to coexist with lesser intelligences.

Yes.

> Reducing the costs of coexistence
> is probably easier than increasing the reliability of its moral code.

Note that reducing the costs of coexistence is something *it*
does, whereas increasing the reliability of its code is something *we*
can do. So the former is not an alternative to the latter.

Also, see the above. Maybe the exact cost of coexistence doesn't
matter much. The question is then whether it will value coexistence
at all. (Though as long as it is restricted to this planet and its
near surroundings, the cost of keeping humans in physical
reality might amount to a large fraction of its capital.)

_____________________________________________________
Nicholas Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb
