CurtAdams wrote:
> >I think it is definitely
> >possible to build an artificial intellect that is smarter than
> >ourselves,
>
> But we'll never have built anything anywhere *near* that intelligent.
You mean that we haven't done that today? (true) Or that we will not
have done it at the time when the singleton is to be constructed?
(might or might not be true -- we don't know whether
superintelligence or nanotech will come first; but whichever comes
first, the other will very probably (IMO) follow).
Note that a singleton does not presuppose superintelligence.
However, the kind of permanent singleton that I envision will
probably not be built until there is superintelligence. A singleton
without superintelligence and nanotechnology will probably look
pretty much like a world government as ordinarily conceived.
> >and whether we will be able to give it the values we
> >choose is an open question.
>
> Past attempts to control the values of complex constructed systems
> have been utter failures. Check any government program.
As I've said before, I don't think historical analogies are of any
use in this case, since the relevant technological conditions will be
totally different. Mind-scans, motivation-centre design etc...
(Besides, it's not true that all attempts to manipulate the values of
complex systems have been utter failures. Without communist
propaganda the Soviet Union would surely not have lasted nearly as
long as it did. The burgeoning advertising industry also has
something to do with controlling people's preferences.)
> Heck,
> even computer programs aren't as well controlled as you require.
How selfish and wicked they are!
> >Do you think the two-step procedure I outlined can't possibly work?
>
> It will fail. There is no way you will be able to control the thing,
> if it has complexity even beginning to approach what it would need.
Do you have an argument for that assertion?
> >So we better make that group inclusive enough that it contains
> >ourselves and everybody we care for or think are ethically justified
> >to have a say.
>
> In that case it includes virtually anybody, and the singleton is
> basically the current situation. Only humans have significant
> influence on what's going on, so society as a whole is a singleton.
If the UN had complete power over all governments, then there would be
a kind of singleton. But in the present world that is not the case,
and there is no singleton.
(It would be good if I (or somebody else) could sharpen the
definition of a singleton.)
> >We also better have institutions, procedures, checks
> >and balances that enable the public to have some degree of confidence
> >that the actual computer programmers and their bosses don't give the
> >singleton other values or functions than the ones that society has
> >decided it should have.
>
> No singleton could be flawlessly designed.
It does not have to be. We could make it in such a way that it would
continually seek its own improvement and strive towards perfection. We
would only have to make it *good enough* to give it this drive. (This
is probably something I should put more emphasis on when I write this
up.)
> That would require
> a designer far more sophisticated than the singleton
I don't know of any general rule that says that you can't
flawlessly design something that is more sophisticated than yourself.
For example, it's quite routine that parents "design" a child that
is smarter than themselves. (But is their offspring flawless though?
-- What does it mean to produce a "flawless" human?)
>, yet the
> singleton would have to be able to run both its designer and
> the rest of society, and thus be more complex than them.
> Paradox time.
I see no paradox here. Why would you have to be more complex than
something in order to run it?
> >> I think we'd have better chances with the grey goo.
>
> >Yes, grey goo is the easy problem. The difficulty is to avoid
> >deliberately designed black goo.
>
> Black goo is almost unimaginably easier to design than any singleton,
Well, that remains to be seen. I mean, political difficulties aside,
we already know exactly how to design a singleton: empower the UN.
That might not be the best possible singleton, but my point is that
even though it would be politically difficult in today's
international climate, there are no technological obstacles to prevent
the formation of a singleton right here and now. But there is a
possibility that these political difficulties may prove harder to
overcome than the technological problem of how to construct black
goo. In that case, we have to hope that the leading force, which
first develops nanotechnology, is as inclusive, cautious and
benevolent as possible.
_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb