poly: Are von Neumann machines inevitable, or even likely?

From: Tim May <tcmay@got.net>
Date: Sun Dec 21 1997 - 17:35:57 PST

At 5:27 PM -0700 12/21/97, Alexander 'Sasha' Chislenko wrote:
>It seems that most scenarios for the civilizations' expansion assume that
>their power and spread will be shaped more by their spatial expansion than
>growth in complexity, power over the laws of physics, ability to create more
>space/time where they are, etc. Of course, extensive factors are easier to
>model and extrapolate, but their role seems to be smaller and smaller even
>in recent human history, and will probably continue to diminish. So I do not
>seriously consider any expansion scenarios based on currently known - or
>currently existing - laws of physics and technology.

I think Sasha has it exactly right.

In the kind of language I think in, an advanced civilization will begin to
build ever-smarter machines (not a new observation, of course) long before
it is capable of building effective interstellar von Neumann probes of
anything more than trivial complexity.

There are several reasons why I think continued expansion of the
"Jupiter-sized brain" (to borrow a phrase popular when I was on the
Extropians list several years ago) will be a higher priority than sending
out replicator/Berserker von Neumann probes:

1. The cyberspace world will likely be more interesting than the
"dustspace" world.

(A situation I think we already see, with the human/cultural/mental world
being vastly more interesting, to most of us, than interstellar space is.
This is a controversial statement, and I am willing to discuss it in more
detail. I think it's more than just an esthetic judgment of what is
interesting...I think there are "logical depth" (the term used by Charles
Bennett in his wonderful essay in "The Universal Turing Machine" collection)
reasons to see why this is so.)

2. Speed of light issues are very real for large computers. Obviously. They
already are for supercomputers.

At the "conversion of the solar system to computational resources" level,
the problem will be severe. Certainly building additional computer
resources in distant places, and then facing huge computational delays in
communicating with them, is problematic.
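
For a rough sense of scale, here is a back-of-the-envelope sketch; the
distances are standard figures, but the scenario itself is only
illustrative:

    # One-way light-speed latencies at various scales.  The distances
    # are standard figures; the scenario is illustrative only.
    C = 299_792_458                  # speed of light, m/s
    AU = 1.495978707e11              # astronomical unit, m
    LY = 9.4607e15                   # light-year, m

    scales = {
        "one Earth diameter":        1.2742e7,
        "Earth-Moon distance":       3.844e8,
        "1 AU (Earth-Sun)":          AU,
        "30 AU (Neptune's orbit)":   30 * AU,
        "4.2 ly (nearest star)":     4.2 * LY,
    }

    for name, meters in scales.items():
        s = meters / C
        print(f"{name}: {s:,.1f} s one-way ({s / 3600:,.2f} hours)")

A "brain" the size of Neptune's orbit has internal one-way latencies of
some four hours; a computing outpost around even the nearest star is years
away per round trip.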

(Mightn't a better strategy be to go out and build matter launchers to
bring the matter back to the origin star system? Or to accelerate the
original system (somehow) out to where more mass exists? Or, speculating,
to go back and recycle all of the old circuits to a newer technology, and
so on for millions of years...?)

(If this "centralize, centralize, centralize!!!" model has any validity, we
might look for "voids." However, looking out amongst the galaxies, as
toward the Great Attractor, we are looking a billion or more years back in
time...obvious calculations about lightcones here, of course.)
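
To make the "obvious calculations" explicit, a toy sketch (the distance,
and the epoch at which a hypothetical civilization goes dark, are pure
assumptions):

    # Lightcone bookkeeping: a void carved out at distance d light-years
    # is invisible here until the light has had d years to arrive.
    d_ly = 1.0e9         # distance to a Great Attractor-scale region, ly
    t_carved = 0.5e9     # years ago the hypothetical builders went dark

    if t_carved >= d_ly:
        print("the void is already inside our past lightcone")
    else:
        print(f"no visible void for another {d_ly - t_carved:.1e} years")

So a civilization that began centralizing "only" half a billion years ago
would show us no void at that distance for another half-billion years.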

3. Logical depth, again. It seems to me that a massive computer, or an
ecology of millions of computers, will be very interested in improving its
own programming, in "thinking about" deep issues, and so on.

(What such a machine might think about is of course squarely on the other
side of the AI Singularity, so even speculating about it is pretty
pointless. I won't get into the common speculation that maybe such a
Jupiter-sized brain will decide "Why bother?" about the issue of
colonizing, but this is a train of thought to consider. It might account
for at least a long delay in launching probes in a goodly fraction of all
advanced civilizations.)

4. The logical complexity in a relativistic von Neumann probe (such as
described by Tipler in the last appendix to "The Physics of Immortality,"
or such as Drexler and several others have described) is likely to be
trivial compared to the complexity of the launching system. (Which is why
the option of moving the entire system, however slowly, may be preferable.)

5. Such an advanced civilization may not expand very far (because it may
not _want_ to send out fairly dumb probes, for example), but it will surely
have very advanced defense measures. A dumb von Neumann probe will not just
be able to land on "Trantor" (so to speak) and begin mining it for raw
materials.

(Very speculatively, we may expect a much more complex ecology of defenses
and antibodies. I am not persuaded that the "maximal expansion at c minus
epsilon" that Carl Feynman and Ralph Merkle have championed so eloquently
is a rich enough model. As others here on Polymath have also noted, we have
seen some cases of a monoculture spreading memes around the world almost
unstoppably (American/European culture and values, for example, in the last
century), but many other memes/genes have _not_ spread unstoppably. Anyway,
this is a whole other discussion.)

A more detailed model of expansion of replicators/Berserkers would include
defensive measures by intelligences which had NOT elected to expand.

These would act as sinks or voids, or even "backwaves." (An intelligence
which elected not to expand and then found evidence of von Neumann machines
reaching it might then decide to do something...another discussion.)
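
A toy version of such a model is easy to sketch (the grid size, breach
probability, and defender placement below are all made-up parameters):

    import random

    # Toy 1-D model of a replicator wave meeting defended systems ("sinks").
    # 'P' = probe-colonized cell, '.' = empty space, 'D' = a defended,
    # non-expanding system.  All parameters are made up for illustration.
    random.seed(2)
    N, STEPS, P_BREACH = 70, 40, 0.05   # cells, timesteps, breach chance
    grid = ['.'] * N
    grid[0] = 'P'                       # the wave starts at one edge
    for i in random.sample(range(10, N), 8):
        grid[i] = 'D'                   # scattered non-expanding defenders

    for step in range(STEPS):
        nxt = grid[:]
        for i, cell in enumerate(grid):
            if cell != 'P':
                continue
            for j in (i - 1, i + 1):    # try to spread to both neighbors
                if not 0 <= j < N:
                    continue
                if nxt[j] == '.':
                    nxt[j] = 'P'        # replicate into empty space
                elif grid[j] == 'D' and random.random() < P_BREACH:
                    nxt[j] = 'P'        # occasional breach of a defender
        grid = nxt

    print(''.join(grid))
    print(f"colonized {grid.count('P')}/{N} cells in {STEPS} steps")

With P_BREACH = 0 the wave simply stops at the first defender; even with
imperfect defense it stalls roughly 1/P_BREACH steps at each one, nothing
like "c minus epsilon."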

6. The "Other" Problem. Why would any intelligence in its right mind send
out instructions which might result in an Other being built which could
come back and destroy it?

(Again, many issues here. Why would anyone in his right mind have children,
who can vie for leadership and even kill the parent? Consult the Greek
tragedies and other sources. Indeed, in many biosystems the children
displace the parents. For entities which can anticipate the future (as
beetles and monkeys cannot, but as humans and AIs can), this is a very real
issue. This gets into the teleology of evolution, but I take a Dawkinsesque
view: what works is what gets replicated...)

And sending out "dumb" von Neumann machines AS SOON AS ONE IS ABLE TO is
stupefyingly dumb. A civilization will think twice before launching a meme
set which will "crystallize" the universe (over cosmological times) if the
predictions of some are realized. This line of reasoning leads to:

7. The Great Silence = The Great Procrastination? Advanced intelligences
may keep delaying the sending out of von Neumann replicators while the
implications are pondered, while simulations are run, etc.

In summary, though I think intelligence is very sparse in the universe, for
AP (Anthropic Principle) reasons (Barrow and Tipler), if there are other
intelligences out there, some will have reached the level we have (about
half, goes the reasoning). They will then reach the point of building AIs,
probably long before deciding to launch VN machines...then the effects I
describe may occur.

Wildly speculating, there may have been civilizations which rushed to
launch von Neumann machines at "c minus epsilon," then thought better of
it and launched "recall probes" (using later propulsion technology, at
"c minus half epsilon"), and so on.

Perhaps the Jupiter-sized AI will conclude that sending out dumb von
Neumann probes was one of the dumber things its predecessors did....

Enough for now. I grant you that my analysis is not based on differential
equations, on cellular automata simulations, or on other such rigorous
models. But I'm not convinced that such rigorous models are necessarily
more correct than looser arguments, for the usual reasons about assumptions
and extrapolations.

(Expansion of a bacterial culture or fungus in a culture dish is easy to
model with growth equations...but it's only a part of how such things work
in a larger ecosystem.)
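
For instance, the dish-level story is one line of mathematics (the
standard logistic equation, with made-up parameters):

    # Logistic growth: dN/dt = r*N*(1 - N/K).  Adequate for a culture
    # dish, silent about the larger ecosystem.  Parameters are made up.
    r, K = 0.5, 1.0e9      # per-hour growth rate, carrying capacity
    N, dt, t = 1.0e3, 0.1, 0.0

    while N < 0.99 * K:
        N += r * N * (1 - N / K) * dt   # crude forward-Euler step
        t += dt
    print(f"dish saturates after roughly {t:.0f} hours")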

In a certain sense, there is a compelling logic to the simple equations
which say that a meme set that destroys all it meets, builds more of
itself, and spreads as quickly as it can will "crystallize" the
universe...but these equations ignore so much. They ignore that such simple
probes will likely be easy to destroy as they approach advanced
civilizations. They ignore that only a dumb intelligence would be
_interested_ in crystallizing the universe this way. And so on.
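
The "compelling logic" is, after all, just a sphere growing at near-c (a
sketch; the launch times are arbitrary):

    import math

    # Volume swept by a front moving at (1 - eps)*c for t years, against
    # a very rough Milky Way disk volume.  All numbers are illustrative.
    eps = 0.01
    GALAXY_LY3 = math.pi * 50_000**2 * 1_000   # ~100,000 ly wide, 1,000 ly thick

    for t in (1e6, 1e7, 1e8):                  # years since launch
        r_ly = (1 - eps) * t                   # light-years covered
        vol = (4 / 3) * math.pi * r_ly**3
        print(f"t = {t:.0e} yr: {vol:.2e} ly^3, "
              f"~{vol / GALAXY_LY3:.1e} galaxy-disk volumes")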

--Tim May

The Feds have shown their hand: they want a ban on domestic cryptography
---------:---------:---------:---------:---------:---------:---------:----
Timothy C. May | Crypto Anarchy: encryption, digital money,
ComSec 3DES: 408-728-0152 | anonymous networks, digital pseudonyms, zero
W.A.S.T.E.: Corralitos, CA | knowledge, reputations, information markets,
Higher Power: 2^2,976,221 | black markets, collapse of governments.
"National borders aren't even speed bumps on the information superhighway."