Re: poly: The singleton hypothesis

From: Robin Hanson <hanson@econ.berkeley.edu>
Date: Sat May 02 1998 - 20:27:19 PDT

Nick B. writes:
>> Relative to the costs of doing other useful things, what are the costs
>> of ensuring that all agents' values and actions don't have consequences
>> which might threaten the totalitarian power?
>
>Suppose, just for the sake of argument, that the costs of this would
>be very high -- say the singleton has to spend 90% of its resources
>on a secret police. Now, if the singleton were a dictator bent on
>survival, even such a cost might not deter him. ...
>... manufacture the AIs with the appropriate goals built in. This
>would eliminate the need for mind-scanning. These AIs (which would be
>superintelligent) would organize the production within the singleton,
>and work on technological development. Humans could exist, as
>uploads, but would not have direct access to any mechanisms that
>could threaten the singleton.

The more totalitarian a singleton is likely to be, the harder people
would fight to prevent its appearance. You might be able to convince
people to fear the dangers of nanotech, nukes, or whatever enough to
support some form of limited world government. But you're talking
about a power so paranoid and totalitarian that it would isolate all
humans, uploads, and any creature without simple loyal goals in
prison/work camps, with no access to anything important, for fear they
might someday revolt. I can't imagine a less attractive option to advocate.
If they thought such an outcome likely, just about any plausible dominant
human power would work very hard to avoid it. So I find it hard to
see how a human-run nanotech transition could create such a singleton
outside of the most disastrous unforeseen accident.