At 7:46 PM -0700 12/21/97, Perry E. Metzger wrote:
>Tim May writes:
>> There are factors which I think are plausible for why continued expansion
>> of the "Jupiter-sized brain" (to borrow a phrase popular when I was on the
>> Extropians list several years ago) will be a higher priority than sending
>> out replicator/Berserker von Neumann probes:
>
>A society doesn't have just one set of goals. Individuals have many
>diverse goals. I suspect that the difficulty of sending out von
>Neumann machines in bulk is so low in a post-nanotechnological world
>that the odds of none of the billions to trillions of intelligent
>entities in the solar system over the next millennium trying it are
>vanishingly small. I mean, at some point, this is going to practically
>become a high school science project. What are the odds of *no one*
>trying it? What are the odds that there are a lot of technological
>civilizations out there and no individual member of any of them
>decides to try it?
The "someone will try it" argument is fairly persuasive...up to a point. I
don't believe von Neumann machines are nearly as easy to construct (or
model) as many do. I suspect that by the time they are buildable, other
forces and considerations will have emerged.
(Perhaps morosely, I think it *likelier* that a man-made biological virus
will destroy all human life long before a realistic von Neumann probe is
ever built. The knowledge to build such a virus (or whatever) is almost
here now, whereas nanotech and replicators are many, many decades off.)
Speculatively, I do tend to favor the "AI Singularity" model (despite my
skepticism about Singularitarian thinking in general!). While homo sapiens
will not be wiped out, I can see the centroid of
knowledge/research/economic power shifting to an nth-generation AI.
(Vinge's "Powers" or the similar AIs of Dan Simmons, David Zindell, and
others.)
Thus, I believe that in, say, 300 years, when the technology/knowledge to
build von Neumann probes exists, the AIs (or _the_ AI) will be deciding. I
could be wrong, but I'm skeptical that rogue launchers will be common.
(Though a rebuttal of this is that we are looking at cosmological times,
and something is sure to "eventually leak out," even if such AIs controlled
things. My rebuttal to this would be that even such leaking out, or rogue,
launches could be countered by more advanced probes to destroy them.)
>> 4. The logical complexity in a relativistic von Neumann probe (such as
>> described by Tipler in the last Appendix to "Immortality," or such as
>> Drexler and several others have described) is likely to be trivial compared
>> to the complexity of the launching system.
>
>Dunno. Bussard ramjet style probes would probably be fairly trivial to
>construct -- or especially to self-construct.
A la the "Tau Zero" ships...but they could still only carry a tiny subset
of the capabilities of the AI.
>> And sending out "dumb" von Neumann machines AS SOON AS ONE IS ABLE TO is
>> stupefyingly dumb.
>
>How would you stop it? Remember, "civilization" does nothing --
Like I said, recall probes and immune systems are likely to be well-developed.
I certainly do agree, by the way, that the absence of any evidence of such
probes gives us some hints as to how scarce intelligence may be.
--Tim May
The Feds have shown their hand: they want a ban on domestic cryptography
---------:---------:---------:---------:---------:---------:---------:----
Timothy C. May | Crypto Anarchy: encryption, digital money,
ComSec 3DES: 408-728-0152 | anonymous networks, digital pseudonyms, zero
W.A.S.T.E.: Corralitos, CA | knowledge, reputations, information markets,
Higher Power: 2^2,976,221 | black markets, collapse of governments.
"National borders aren't even speed bumps on the information superhighway."
Received on Mon Dec 22 03:00:47 1997