Re: poly: Controlling an SI?

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Tue Jun 09 1998 - 20:14:33 PDT

Peter C. McCluskey wrote:

> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
> >Why do you think that a legal system that included the extra clause
> >(saying that human-level intelligences don't have rights) would
> >be less economically efficient than one which didn't? Consider:
> >(1) Once the humans are annihilated, nobody needs to think about the
> >law any longer.
>
> That ignores the costs of changing moral systems (see below), and assumes
> that no digital agents of human-level intelligence will be valuable
> afterwards.

I don't see how it assumes that.

> Where will humans whose minds include some digital enhancements fit
> into your predictions?

My primary prediction is that there will be a singleton, perhaps a
dominating superintelligence, which might create other
superintelligences and agents but only ones whose values are
compatible with the singleton's.

What we are discussing here is what would happen if we assume that
at some point there are humans, AIs, uploads, superintelligences
etc. coexisting. I guessed (I'm less certain about this) that the
superintelligences, if they didn't like the humans, would simply kill
them and take their resources, especially if the humans owned a large
amount of resources. The superintelligences might draw a somewhat
arbitrary line somewhere between humans and smaller AIs. The wider
the variety of intermediary life-forms, the better the chances seem
that the whole arrangement would be stable and that a revolution
could be avoided.

> I don't think such a line needs to be drawn. I think that any entity
> which claims to have purchased, homesteaded, or have been given a piece
> of property should have that claim evaluated without regard to the nature
> of the entity making the claim.

What about legal incapacity? What about a parrot that has learnt to
say "This is mine"?

> >(3) Maybe superintelligences can manage their capital more
> >efficiently than somewhat irrational humans.
>
> I assume so, and don't see its relevance.

This would seem to justify the superintelligences taking over in the
name of economic efficiency.

> Are you asking what happens if humans no longer have
> property rights? About the difference between direct and indirect control?

About what happens if the humans claim to have property rights but
have no way of enforcing them without the voluntary cooperation of
the superintelligences.

> not just the immediate effects of applying
> the rule, but such things as whether it is stable. For instance, with
> the IQ-based rule that you seem to hint at, it would seem wise to ask
> whether there would be a slippery slope over which the majority with
> the highest 75% of IQs at any one time would keep redefining the threshold
> upwards to steal from those with the lowest IQs.

Yes. One solution would be for the colluding superintelligences to
modify and check each other's motivations so that all can verify
that nobody wants to repeat the redefinition.

Failing such a procedure, I wonder whether there are some idealized
conditions under which equilibrium would be reached only after
plotting has eliminated all but one player (not necessarily the
strongest or most intelligent one).
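
A crude toy model (entirely my own numbers and simplifying
assumptions) of how the iterated redefinition could play out: if the
top three quarters of the remaining population can always redefine
the threshold and expropriate the bottom quartile, the protected
class shrinks each round and the process only stops once a single
player is left.

import random

random.seed(0)

# Hypothetical starting population: (iq, resources) pairs.
players = [(random.gauss(100, 15), 1.0) for _ in range(1000)]

rounds = 0
while len(players) > 1:
    players.sort(key=lambda p: p[0])     # rank by IQ
    cutoff = max(1, len(players) // 4)   # bottom quartile loses its rights
    victims, survivors = players[:cutoff], players[cutoff:]
    loot = sum(resources for _, resources in victims)
    share = loot / len(survivors)        # spoils split among the coalition
    players = [(iq, r + share) for iq, r in survivors]
    rounds += 1

print(f"Only one player left after {rounds} rounds of redefinition")

Of course this builds in the conclusion (the coalition always defects
on its weakest members), so it only illustrates the dynamics, not an
argument that real superintelligences would behave this way.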

> Both sides can influence the cost (e.g. by whether they claim resources
> that haven't been homesteaded yet).

Yes. However, if the superintelligences are considering action
against the group of all humans, then there might be a free-rider
problem (you want other humans to resign their property claims, but
you yourself don't want to surrender yours). Also, humans have a
certain minimum resource requirement just to exist at all.
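
To make the free-rider structure concrete, here is a toy payoff
calculation (the numbers and the crude risk model are my own
assumptions, chosen only for illustration): resigning a claim costs
its holder the full private value of the property, while the risk of
provoking the superintelligences is shared by humanity collectively,
so holding out dominates individually even though universal
resignation would be collectively safer.

N = 1_000_000          # number of humans
V = 1.0                # private value of one's own property claim
RISK = 5_000_000.0     # assumed collective cost if the SIs take action
# Crude assumption: the chance of provoking action scales with the
# fraction of humans who refuse to resign their claims.

def payoff(i_hold: bool, others_holding: int) -> float:
    holders = others_holding + (1 if i_hold else 0)
    p_action = holders / N                 # probability the SIs act
    expected_loss = p_action * RISK / N    # my share of the collective cost
    return (V if i_hold else 0.0) - expected_loss

for others in (0, N // 2, N - 1):
    gain = payoff(True, others) - payoff(False, others)
    print(f"others holding = {others:>9}: gain from holding out = {gain:+.6f}")

With these made-up numbers everyone holding out leaves each human
worse off than universal resignation would, yet holding out still
pays for each individual, which is exactly the free-rider problem.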
_____________________________________________________
Nicholas Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb