Re: poly: Dumbing down AIs (was: Modeling Economic Singularities)

From: Robin Hanson <hanson@econ.berkeley.edu>
Date: Sun Apr 26 1998 - 14:07:00 PDT

Nick B. writes:
>> >However, might it be possible to limit the AI in such a way that it
>> >doesn't have consciousness or count as a person (and hence doesn't have
>> >rights) while retaining most of the benefits? ...
>> Imagine that the U.S. still had slavery, and instead of abolishing
>> slavery, someone proposed that we genetically modify slave babies so
>> that they met your criteria.
>
>The relevant analogy seems rather to be if somebody proposed to
>genetically modify zygotes, not babies.

That is what I had in mind (sorry for the imprecision).

>I don't think it is ethically wrong to modify human zygotes in such a
>way that they grow up to be brainless organ-banks.
>
>>... You've in effect killed the slaves,
>> but kept their functioning bodies, and called this an improvement.
>
>You haven't killed any slaves; you've prevented them from being born.
>Whether this is an improvement depends on what you put in their place.

I don't see an important difference between killing and preventing from
living. And you did say what would be in their place: conscious AIs.
I'm saying I think it is unethical to replace conscious AIs with
unconscious ones, just for the purpose of avoiding having to give them
"rights".

>... Of course, there might be good practical reasons for
>limiting population growth, but in general I would say: the more, the
>better (even if it would somewhat lower the average quality of life).

Why not apply this logic to AIs?

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614
Received on Sun Apr 26 21:13:12 1998
