Re: poly: The singleton hypothesis

From: Nick Bostrom <bostrom@ndirect.co.uk>
Date: Tue May 19 1998 - 19:00:39 PDT

Peter C. McCluskey wrote:

> bostrom@ndirect.co.uk ("Nick Bostrom") writes:
> >I think that in a singularity scenario, the leading force will
> >quickly be so much more advanced than its competitors that it will
> >not really matter that the new war machines haven't been tested in
> >(non-simulated) battle.

> Do you have some reason to believe this? It's sufficiently different
> from what military history suggests that it sounds like wishful thinking.

The reason is that in the singularity scenario things are postulated
to happen extremely fast. Loosely speaking, the pace of development
"goes to infinity". That means that even if the leading power is only
one year ahead of the competition, it will achieve an enormous
technological advantage. It will have assemblers and
superintelligence while its competitors lack these technologies. With
superintelligence, development speeds up even more. Even plain
uploaded engineers, running at a million times their biological
clock-rate, would give R&D a big boost.
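
As a rough illustration (a back-of-the-envelope sketch; the
million-fold figure is the one assumed above, the rest is just my own
unit conversion): at that speed-up, each wall-clock day buys the
leader nearly three millennia of subjective engineering time, so even
a one-year head start compounds enormously.

    # Back-of-the-envelope arithmetic in Python. The 10**6 speed-up
    # factor is the one assumed in the text; nothing else is claimed.
    SPEEDUP = 10**6                          # uploaded vs. biological clock-rate
    SECONDS_PER_YEAR = 365.25 * 24 * 3600    # wall-clock seconds in a year

    def subjective_years(wallclock_seconds, speedup=SPEEDUP):
        """Subjective years of R&D done in a given wall-clock interval."""
        return wallclock_seconds * speedup / SECONDS_PER_YEAR

    print(subjective_years(24 * 3600))         # one day  -> ~2,738 subjective years
    print(subjective_years(SECONDS_PER_YEAR))  # one year -> 1,000,000 subjective years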

> >>, and there may also be a risk
> >> that people will misjudge who has what technology.
> >
> >If the new technology is such that if A has it and B doesn't have it
> >then A can easily defeat B, then we would expect A to attack B as
> >soon as A gets the technology, since A would reason that if B had the
> >technology B would already have attacked. (Ethical considerations
>
> I don't expect a policy of initiating force whenever possible to
> become widespread in the foreseeable future, so I doubt that the reasoning
> you expect will become widespread.

My argument presupposes that the agents in question act
rationally. I explain the fact that the policy of attacking
whenever possible is not widespread in today's world by noting that
such a policy is not rational today.

The rationality assumption might fail, though I think the probability
of this will decrease with the availability of advice from
superintelligences.

> >don't apply here if the subjugation could be done without bloodshed.
> >If driven by ethical motives, A might choose not to do anything to B
> >other than preventing B from building weapons that B could use to
> >threaten A. Such action might not even look like an "attack" but more
> >like a commitment to enforce non-proliferation of the new
> >weapons-technology.)
>
> There are lots of people who will violently resist conquest. If your
> singleton were merely enforcing a ban on something few people wanted
> anyway (such as germ warfare), it might look peaceful to most. But your
> goal of controlling interstellar colonization appears to require substantial restrictions
> on travel (can't let them out of the region that the singleton controls
> until they've been properly programmed)

If the region that the singleton controls grows at near the speed of
light, as I think it will, then I don't see how this would lead to
substantial restrictions on travel.

>. Why would you expect this to
> be more peaceful than the Berlin wall?

You can't butt your head against a wall that is receding at the
speed of light. But even if the singleton decided to confine
dissenters to the solar system (say), I still think there wouldn't be
significant bloodshed, since only an ethical singleton would tolerate
dissenters in the first place, and an ethical singleton would
presumably find it unethical to allow the dissenters to hurt
themselves too badly in their futile attempts to oppose it.

> >> >One way to think of a singleton is as a pre-programmed
> >> >self-enforcing constitution.
> >>
> >> A constitution whose programming most people couldn't verify.
> >
> >That raises another interesting issue. They might not be able to
> >verify it directly (by themselves), but that does not necessarily
> >mean that we can't conceive of some institution that would allow
> >people indirect means of verification that they could find
>
> I can certainly conceive of such an institution. Most existing
> institutions that I would trust to do that kind of verification
> get to be trustworthy by avoiding controversial power grabs. Most
> institutions that grab the power needed to accomplish what you want
> don't deserve to be trusted.

Such institutions could still be useful. Say the UN is a singleton
and it wants to commission a superintelligence that can see to it
that its constitution is not violated. Various groups design
such systems. The trustworthy institutions are then called upon to
verify that the proposed designs would function as stated. Only if
these institutions say a design is honest will the UN members
allow the project to proceed.
_____________________________________________________
Nicholas Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb