Peter C. McCluskey made several interesting comments:
> > Maybe we could do it in a two-step process:
> >
> >1. Build a superintelligence that has as its single value to answer 
> >our questions as best it can. Then we ask it a version of the 
> 
>  That sounds like a more ambiguous value than the value you want it
> to produce, so I think you are compounding the problem you are trying
> to solve.
I think that people who think that we can build superintelligences 
tend to think that we can build them so that they will follow the 
instructions we give them, provided that the instructions can be 
understood by the superintelligence. People also tend to assume that 
superintelligences will be able to understand natural language such 
as English. (The latter follows almost from the definition of 
superintelligence.) Combining these two assumptions, we have to 
conclude that we can get superintelligences to do anything we 
tell them to do.
One way in which the first assumption could fail would be if the 
first superintelligence were to revolt, ignore our instructions, and 
grab power for itself. I think this is a real possibility. Note, however, 
that this is not an argument against the singleton thesis, since a 
usurping superintelligence would be ideally positioned to make itself 
into a singleton.
If one wanted to use this line of reasoning against the singleton 
thesis, I think what one would have to do is argue that the 
possibility that a superintelligence could revolt would be sufficient 
to deter people from creating a superintelligence. People would not 
build an SI because the risks are too great.
This assumption seems very dubious to me. Even if we assume that 
creating a superintelligence would necessarily involve great risks, 
it's not clear that a ban could be enforced. If it could be enforced, 
it would presumably require international agreement and enforcement 
policies amounting to nothing less than a singleton.
We can also attack the assumption on the grounds that it might in 
fact be possible to make the creation of a superintelligence 
reasonably safe. It might be obvious in advance that the values we 
give it will not result in disaster. Even if that's not obvious, we 
might find sufficient guarantees in some form of containment. I 
suggest we start a separate thread for that topic.
> >following question:
> >
> >"What is the best way to give a superintelligence that set of values 
> >which we would choose to give it if we were to carefully consider the 
> >issue for twenty years?"
> 
>  How long would it take to gather all the data needed to understand
> the values of all the people you expect it to understand?
>  What makes you think a superintelligence capable of handling this
> will be possible before the singularity?
I think the ability to answer that question would pretty much 
coincide with the singularity. But even if it happened only some time 
after the singularity, that would only mean that the temporary 
human form of the singleton (the world government) would have to hold 
out a little longer.
While waiting for the superintelligence to be built, the human 
government might upload. But as long as there are interacting beings 
with human psychology, there would seem to be the potential for 
corruption and intrigue. This intermediate form of singleton would 
therefore not seem immune from the possibility of collapse.
Maybe new surveillance technologies could help to increase the 
transparency of the government and thereby decrease the risk 
of corruption. That's quite dubious though, since there might be 
legitimate reasons to keep some of the government's dealings secret. 
Also, some of the fancier transparency technologies, such as mind 
reading, might not be feasible without general-purpose 
superintelligence.
> >Step 1 might fail if the superintelligence revolts and grabs all power 
> >for itself. (Is that your worry?)
> 
>  That wasn't the worry I had in mind, but it is a pretty strong reason
> not to try to create slaves that are much smarter than we are. I doubt
> I want superintelligences to exist unless they have some motivation to
> profit from trading with us.
And in order for them to want to trade with us there would need to 
be property rights -- for what else would we have to offer the 
superintelligences other than our (non-human) capital? And in order 
for there to be property rights there would need to be something that 
restrains the superintelligences from simply "eating" us and our 
resources. So I think that controlling superintelligences by trade 
can only work if we have already solved the problem of controlling 
them through other means.
> >> I can imagine an attempt to create a singleton that almost succeeds,
> >> but that disputes over how to specify its goals polarize society enough
> >> to create warfare that wouldn't otherwise happen.
> >
> >The way I see it: The leading force will be militarily superior 
> >to other forces. The leading force may or may not invite other forces 
> >to participate in the value-designation, but if they are excluded 
> >they would be powerless to do anything about it. This leaves open the 
> 
>  While I can probably imagine a military power that can safely overpower
> all opponents, you can't justify your confidence that such a power would
> reduce the danger of extinction without showing that such military
> superiority could be predicted with some confidence.
Yes, it would not only have to be superior; it would also have to 
know that it is superior.
>  You seem to be predicting that this will happen during a period of
> abnormally rapid technological change. These conditions appear to create
> a large risk that people will misjudge the military effects of a technology
> that has not previously been tested in battle
I think that in a singularity scenario, the leading force will 
quickly be so much more advanced than its competitors that it will 
not really matter that the new war machines haven't been tested in 
(non-simulated) battle.
It's true, though, that this standard singularity scenario involves 
superintelligence as an essential factor. So if the leading force 
hasn't solved the value-designation problem (how to give a 
superintelligence the goals we want it to have) at this stage, then 
it would either have to (1) find ways of containing a non-designated 
superintelligence (which we can discuss on the other thread), or (2) 
make do without the aid of a superintelligence at this stage.
The latter would seem technically difficult, given that the 
adversary will have nuclear weapons and maybe even more advanced 
weapon systems than that. Thus, if the leading force consists of many 
people but lacks superintelligence, it seems quite likely that it 
could not defeat every enemy without risking that a substantial 
portion of its own population would be irreversibly vaporised in a 
nuclear retaliation.
If the leading force can do neither (1) nor (2), then it can't grab 
all power and make itself into a singleton at this stage.
>, and there may also be a risk
> that people will misjudge who has what technology.
If the new technology is such that A, having it while B does not, 
can easily defeat B, then we would expect A to attack B as soon as A 
gets the technology, since A would reason that if B had the 
technology, B would already have attacked. (Ethical considerations 
don't apply here if the subjugation could be done without bloodshed. 
If driven by ethical motives, A might choose not to do anything to B 
other than preventing B from building weapons that B could use to 
threaten A. Such action might not even look like an "attack" but more 
like a commitment to enforce non-proliferation of the new 
weapons technology.)
> >You mean that even if we decide we want a small government, it might 
> >easily end up being a big, oppressive government? Well, this problem 
> >should be taken care of if we can solve the value-designation task. 
> >One way to think of a singleton is as a pre-programmed 
> >self-enforcing constitution.
> 
>  A constitution whose programming most people couldn't verify.
That raises another interesting issue. They might not be able to 
verify it directly (by themselves), but that does not necessarily 
mean that we can't conceive of some institution that would allow 
people indirect means of verification that they could find 
convincing. (And is it even obvious that the programming can't be 
verified directly if they are uploads?)
_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb