Re: poly: The singleton hypothesis

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Sat May 02 1998 - 11:07:58 PDT

Robin Hanson writes:

> Nick B. writes:
> >> There is a vast difference between a single world government and
> >> such a totalitarian power. Even when it is in principle possible to
> >> read and modify brains, it will at first be very expensive.
> >
> >Do you agree that this technology could be a strongly stabilizing
> >factor once it's cheap? And how could it be very expensive if it
> >requires nanotechnology? If you have one brain-scan machine, you can
> >use nanotech to cheaply give you (almost) any number you want.
>
> It's just wrong to think that nanotech makes everything too cheap
> to worry about costs.

Yes, definitely.

> *Relative* costs are the ones that will matter.
> Relative to the costs of doing other useful things, what are the costs
> of ensuring that all agents' values and actions don't have consequences
> which might threaten the totalitarian power?

Suppose, just for the sake of argument, that the costs of this would
be very high -- say the singleton has to spend 90% of its resources
on a secret police. Now, if the singleton were a dictator bent on
survival, even such a cost might not deter him. He would have to
build fewer palaces, but if that's necessary to survive, it's a price
he would most likely pay.

Even the Soviet Union, which didn't have any mind-scanners or
physiological value-manipulators, might have been quite stable if it
hadn't faced external competition. The sort of stability that I claim
it would be possible for a singleton to have, however, is of a
different order altogether. We're not talking about a few hundred
years, but "forever" or at least until it encounters aliens. Thus I
can't seek support in historical analogies. But neither, I think, can
people of the opposite opinion. For instance, I don't think the
comparison with black slavery is relevant for several reasons, one of
which is that slave-owners didn't have advanced mind-control
techniques.

As for the relative cost of brain-scans, I don't think it's a
critical parameter, because I think the number of brains that need to
be scanned will be very limited. Say ten billion or whatever. I think
most new minds will be AIs and maybe uploads. These could be
constructed in such a way, I think, that their goal modules could
easily be read and, if necessary, modified.

Here is one slightly more fleshed-out possibility (not the
best one):

Assuming that AIs can be built ab initio (i.e. they don't require
uploads), which I think is very probable, it should be possible to
manufacture the AIs with the appropriate goals built in. This
would eliminate the need for mind-scanning. These AIs (which would be
superintelligent) would organize the production within the singleton,
and work on technological development. Humans could exist, as
uploads, but would not have direct access to any mechanisms that
could threaten the singleton. They might be allowed to do what they
want in their simulated worlds. (If the singleton is of the
dissolving type, they could be released and empowered when it was
time for the singleton to dissolve.)
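
To illustrate the kind of separation I have in mind -- this is only a
cartoon, not a design; the class name, the action whitelist and so on
are invented for the example -- the uploads' access could be mediated
by an interface that simply never exposes any mechanism outside their
simulated world:

# Cartoon of the access separation: an upload interacts with its
# simulated world only through a narrow interface, and actions that
# touch the singleton's real infrastructure are simply not part of
# that interface.

ALLOWED_ACTIONS = {"move", "speak", "build_in_sim", "create_art"}  # invented whitelist

class SimulatedWorldInterface:
    """Everything an upload can do goes through request(); nothing else exists for it."""

    def request(self, action: str, **params):
        if action not in ALLOWED_ACTIONS:
            # Nothing outside the simulation is even nameable here.
            raise ValueError(f"unknown action: {action!r}")
        return self._apply_in_simulation(action, params)

    def _apply_in_simulation(self, action: str, params: dict):
        # Placeholder: update only the simulated world's state.
        return {"action": action, "status": "applied in simulation", **params}

The point is only that the uploads' freedom within the simulation and
their lack of access to anything outside it are enforced by the same
narrow interface.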

How can AIs be designed with built-in goals? There are surely better
ways, but one method would be to generate a number of AIs with
different goals; let them act in a simulated environment, in morally
challenging situations (without knowing that it is a simulation);
select the ones that behave virtuously; and then copy them and put
them in control.
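
To make the loop a little more concrete, here is a toy sketch in
Python. Everything in it is a stand-in I have invented for the
example -- a single goal parameter, a trivial "moral trial", a
pass-rate threshold -- and the real selection would of course be
vastly more subtle; only the generate / simulate / select / copy
shape matters:

# Toy sketch of the generate-simulate-select-copy procedure.
import random
from dataclasses import dataclass

@dataclass
class CandidateAI:
    """An agent whose behaviour is determined by one goal parameter."""
    goal_weight: float  # stand-in for the agent's goal module

    def act(self, temptation: float) -> bool:
        # True if the agent behaves "virtuously" despite the temptation.
        return self.goal_weight > temptation

def moral_trial(agent: CandidateAI, n_scenarios: int = 100) -> float:
    """Fraction of simulated moral challenges the agent passes."""
    passes = sum(agent.act(random.random()) for _ in range(n_scenarios))
    return passes / n_scenarios

def select_trustworthy(population_size: int = 1000, threshold: float = 0.95):
    # 1. Generate candidates with varied goals.
    candidates = [CandidateAI(random.random()) for _ in range(population_size)]
    # 2. Test each in the simulated environment.
    scored = [(moral_trial(c), c) for c in candidates]
    # 3. Keep those that behave virtuously, and copy them into service.
    virtuous = [c for score, c in scored if score >= threshold]
    return [CandidateAI(c.goal_weight) for c in virtuous]  # the copies

if __name__ == "__main__":
    trusted = select_trustworthy()
    print(f"Selected {len(trusted)} candidates for deployment.")

One could add further rounds of testing on the copies before putting
any of them in control, but that is a refinement of the same scheme.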

> What if their planned acts
> threaten the power even if their values don't?

With superintelligence, multiple back-up systems, and a great deal of
care, I think it would be possible for the singleton to avoid dying
through somebody making a mistake.
 
> You might be able to design agents from the ground up in such a way that
> it was cheap to monitor their loyalty to the totalitarian power. But
> the cost of destroying all other agents would seem an enormous loss.

As for the ethical cost, the above scenario avoids it since the other
agents are not destroyed. This particular scenario does sacrifice
some of their skills (those that cannot be safely transferred to the
superintelligences). It might not take long for a society of
superintelligences (running perhaps millions of times faster than
humans) to redevelop those skills.

_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb