Re: poly: The singleton hypothesis

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Fri May 08 1998 - 20:50:09 PDT

Peter C. McCluskey wrote:

> carlf@alum.mit.edu (Carl Feynman) writes:
> >My intuition is that given trillions of Jupiter brains working for billions
> >of years, there will be no room left for technological or intellectual
> >improvements: the limits of feasibility will be coincident with the limits
> >of possibility. We will reach the highest possible level of development,
> >and so will any aliens we run into.
>
> This seems like a reasonable guess for brains that are unconstrained by
> goals orthogonal to survival. It's combining the requirement that the
> singleton have different goals that triggers my disbelief.

We can build a lot of Jupiter brains whose sole goal in life is to
make technological discoveries. The political thinking could be left
to other modules.

> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
> >I think it implies that it is unlikely that it will make such a
> >simple mistake as giving itself goals that it does not want to have.

> It is very hard to create completely unambiguous specifications of goals.
> I think that if it is possible to create this singleton before widespread
> use of nanotech, it will just barely be possible. Adding the stipulation
> that the creators will also spend whatever time and thought is needed
> to design the perfect or a near perfect set of goals into it reduces
> the credibility of your hypothesis much further.

If initially we don't know how to reliably program a
superintelligence to have exactly the goals we want it to have, then
we might choose to put the decision-making authority with a group of
people, perhaps a (world?) government, until we know how to transfer
the right values to a computer. During this interim period there
would be the usual possibility of intrigue and corruption that could
endanger the newborn singleton.

I'm not sure how hard or easy it would be to make the necessary goal
specifications. Maybe we could do it in a two-step process:

1. Build a superintelligence that has as its single value to answer
our questions as best it can. Then we ask it a version of the
following question:

"What is the best way to give a superintelligence that set of values
which we would choose to give it if we were to carefully consider the
issue for twenty years?"

2. Follow the instructions obtained in 1.

Step 1 might fail if the superintelligence revolts and grabs all
power for itself. (Is that your worry?) But if we can make a
superintelligence that just obeys our commands or tries truthfully
to answer any question we ask it, then this procedure would seem to
quickly give us the value-specifications we want. (The basic point
is that since we are dealing with superintelligences, it suffices if
we specify the goals on a very high level, maybe even a meta-level.)

> I can imagine an attempt to create a singleton that almost succeeds,
> but that disputes over how to specify its goals polarize society enough
> to create warfare that wouldn't otherwise happen.

The way I see it: the leading force will be militarily superior to
other forces. The leading force may or may not invite other forces
to participate in the value-designation, but if they are excluded
they would be powerless to do anything about it. This leaves open the
possibility that there might be internal strife within the leading
power.

Many people will think that for both ethical and practical reasons,
we should advocate a set of meta-values that give great autonomy to
individual values. Very roughly speaking: let the singleton make sure
that we don't do something too destructive to each other but
otherwise give each of us a territory that we can do what we want
with. Maybe the stake in the singleton should be proportional to the
capital that each member (in the leading power and the groups it
chooses to include, hopefully all of humankind) owns. Some fraction
of the singleton's resources could be used to create a welfare
system, or alternatively people could make donations for this purpose
(a very small fraction would go a long way). There would also have to
be rules saying that people can only create new sentient beings if
they are prepared to give their creations some part of their own
capital. Etc.
There are clearly many things to be argued out. How likely it is that
these issues can be settled in a peaceful way depends on the
character of the leading force: for example, if it's a democratic
body then its members would simply vote on the alternatives.

> Another typical claim made by the "trust the government" advocates.
> "The income tax will never be raised to 10%." "Social security numbers
> will never be used as a national id system."
> If neither the singleton nor its components had any self-preservation
> instinct, I might be willing to believe it.

You mean that even if we decide we want a small government, it might
easily end up being a big, oppressive government? Well, this problem
should be taken care of if we can solve the value-designation task.
One way to think of a singleton is as a pre-programmed,
self-enforcing constitution.

_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb
Received on Sat May 9 03:18:17 1998
