Re: poly: Modeling Economic Singularities

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Thu Apr 23 1998 - 18:53:02 PDT

"Peter C. McCluskey" <pcm@rahul.net> writes:

> An AI would probably have rights comparable to those of a human, which
> would provide a strong deterrent to further investment by a company whose
> software was close to whatever threshold would give it rights.

If the rights obtain only when the AI reaches human-level
intelligence, then I don't think this would be a significant
deterrent; the benefits of creating an AI that is human-equivalent or
slightly above human are so great that they would be worth almost any
amount of legal complications.

However, might it be possible to limit the AI in such a way that it
doesn't have consciousness or count as a person (and hence doesn't
have rights), while retaining most of the benefits? If the AI were
domain-specific, did only exactly what it was told, and had no
capacity for self-reflection or long-term planning, would that save
us from having to grant it person-status?
_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb