bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>"Peter C. McCluskey" <pcm@rahul.net> writes:
>
>> An AI would probably have rights comparable to those of a human, which
>> would provide a strong deterrent to further investment by a company whose
>> software was close to whatever threshold would give it rights.
>
>If the rights start to obtain only when the AI reaches
>human-level intelligence, then I don't think that this would be a
>significant deterrent; for the benefits of creating an AI that is
>human-equivalent or slightly above human level are so great that they would
>be worth almost any amount of legal complications.
>
>However, might it be possible to limit the AI in such a way that it
>doesn't have consciousness or count as a person (and hence doesn't have
>rights) while retaining most of the benefits?
If there were a clear consensus behind a readily observable threshold,
there would be a strong incentive to develop AI to just below that threshold.
But I see no hint of agreement about the basic theory, and most people seem
to be leaning towards criteria such as consciousness, which are hard to observe.
> If the AI were
>domain-specific, and only did exactly what it was told, and did not
>have any ability for self-reflection or long-term planning, would
>that save us from having to give it person-status?
I have doubts about whether software that met these restrictions would
be powerful enough to contribute much to an economic singularity.
--
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list