Re: poly: The singleton hypothesis

From: Nick Bostrom <>
Date: Thu Jun 04 1998 - 12:44:01 PDT

Peter C. McCluskey writes:

> >Since the open research is open, the military research could hardly
> >fall behind -- they too can read all the papers that the open
> >community publishes.
> If they had identical needs, this would make some sense, although
> even then there is often some delay between when the research results

I agree that there could be some delay. However, the more their
needs diverge, the greater the chance that the military labs would be
leading in the specific areas they are focusing on.

> When the military has some needs that are different from those of
> the open researchers, it may also have the problem that the best researchers
> may prefer working in open, peace-oriented labs.

A *good* singleton would be the ultimate peace- and openness-utopia.
Failure to support the research in the military labs of their own
nation could lead to some evil nation getting there first. Thus
conscientious scientists should rally to the military labs (at least
in the nations that are considered to be good by their citizens).
This is what happened with the Manhattan project, which was supported
by many of the leading physicists of the Allied nations, plus some
defectors from Germany.

The singleton thesis is not a doomsday scenario. A singleton of
the right sort would be as close to utopia as one could get.

> I agree that military labs will have technological edges in some
> specialized areas. I don't see these specialized improvements giving
> decisive advantage over what competing military organizations will
> be able to create by copying and enhancing upon the open research.

Suppose you have an assembler, and that there are a number of
nanomachines around that perform various tasks, such
as cleaning up dioxin in the lakes. Then all a military lab
would have to do is design a variant of such a nanomachine
that, say, replicates and, with delayed action, eats enemy steel
structures. Then they've won.

Rival military organizations could have built a similar nanobot,
but by then it would be too late. (And the very fact that they could
have built one is an important part of the reason why the leading
force will take steps to make sure that they don't.)

> To return to your claim that the first military to create an important
> breakthrough would use it immediately to conquer the world, it assumes
> that no preparation is needed after the research lab makes the breakthrough.
> This is inconsistent with all historical examples I can think of.

Methodological remark: I wonder if that is the reason why people
fail to realize certain consequences in this domain -- they keep
thinking in terms of historical examples. I instead tend to think in
terms of what will be technologically possible, and what a
cost-benefit analysis implies that rational agents would do in these
completely novel situations.

> For instance, the design breakthrough that led to the atom bomb
> didn't include ways to instantly produce them fast enough to guarantee
> that they would be decisive

True. However, with general assemblers you get exponential growth and
can hence manufacture almost arbitrary amounts of nanites or devices
in a very short time, at least if you have stockpiled the raw
materials.
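The arithmetic behind the exponential-growth point can be sketched in a few lines. The numbers here (a 1-gram seed stock, a billion-kilogram target, a one-hour replication cycle) are purely illustrative assumptions, not figures from the discussion, and the model idealizes away feedstock, energy, and failure rates:

```python
import math

def cycles_to_mass(seed_kg, target_kg, cycle_hours):
    """Replication cycles for a self-doubling stock to grow from
    seed_kg to target_kg, doubling once per cycle (idealized:
    unlimited raw materials and energy, no failures)."""
    cycles = math.ceil(math.log2(target_kg / seed_kg))
    return cycles, cycles * cycle_hours

# Hypothetical numbers: 1-gram seed, 10^9 kg of product,
# assumed 1-hour doubling time.
cycles, hours = cycles_to_mass(1e-3, 1e9, 1.0)
print(cycles, hours)  # 40 doublings, i.e. about 40 hours
```

The point the sketch makes is that the timescale grows only logarithmically with the target mass: even wildly larger targets add only a handful of extra doublings, which is why stockpiled raw materials, rather than production time, become the binding constraint.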

>Carl's example of mass-produced missiles requires some
> time to install and target them (the targeting would probably require
> somewhat different strategies than are in use for scarcer missiles),
> plus separate work on missile defense.

As far as I understand it, Carl's example was only intended to set a
lower bound on what will be possible. He mentioned that it might need
a middle-sized country, and that the people could be trained in
advance. Missile defenses will presumably be developed before
molecular nanotechnology.

Even this lower bound thus seems sufficient. Even if the preliminary
preparations were very slow, this might not prevent the leading force
from carrying it off, unless other nations were willing to nuke it
merely because of its research and preparation programs.

Another alternative is that the leading force would use "programmable
germs", black goo, rather than mass manufacture of bulk-technology
weapons. The production and launch step could then occur overnight.
Not even to mention what would be possible if superintelligence is
achieved first.

-- I have begun writing a paper where I will try to put my ideas
on this topic together. The discussions on this list have been --and
are-- very helpful indeed.

Nicholas Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
Received on Thu Jun 4 18:57:44 1998
