2005 Nov 3 original version

[Also see Eliezer Yudkowsky's analysis of different 'schools']

There's something of a Singularity backlash these days, at least memetically. To pick a prominent example from rec.arts.sf.written, we have James Nicoll, who asks for optimistic hard SF set in the near future, in space, with no catastrophes killing off most of humanity, and with *no Singularity*.

What exactly does he mean by that? I could ask him, but it's more fun to go through some possibilities first, since in my experience "the Singularity" has become slippery, with many possible (if related) meanings.

To go from the most extreme version down to the most sensible root: first we have the "Technorapture", where something -- intelligence, connections, technology -- increases exponentially over the course of a few hours, culminating in a mass upload of everybody on Earth and possibly their subsequent disappearance from the physical universe. The core Vingean example is the one presumed to have happened in _Marooned in Realtime_ (where the Extinction/disappearance was a *plot device*, lest we forget). Preceding that as an image was _Childhood's End_, though psionics and the Singularity don't mix well. Explosive transcensions also happened in _A Fire Upon the Deep_, though at least the Powers stayed in our universe. Extreme Drexlerian nanotech tends to be involved, to do the uploading and to provide computational substrate.

Saying you don't want this is perfectly reasonable to me; Drexlerian nanotech is something to be skeptical of, as are the claims to extreme speed and universality of the process, and we can forget about the mystical disappearance.

(Though in fairness I'll note that the concept of making "basement universes" to disappear into has received some attention from physicists. But mass nanotech is not obviously a tool for quickly making black holes, which would be needed.)

(I'll also note that while Ken MacLeod is sometimes quoted for his "The Rapture for Nerds!" line in _The Cassini Division_, the nerds in the book turned out to be *right*. They had their Rapture, and they were conscious, not zombie programs.)

Dropping down a level, "post-Singularity" has been applied to various settings, not always with superhuman intelligence, but with cool technology which some might think makes storytelling challenging -- generally nanotech and AI, the key functions being the copyability of almost everything, including human minds. The watchwords are "post-scarcity" and immortality ("post-mortality", perhaps?). Examples: MacLeod's _The Sky Road_, where a near future with nanotech immortality pills and some modestly transhuman minds was internally called "post-Singularity". Vinge's "Just Peace" had mental copying; his "Original Sin" had immortality and weird probability tech; MacLeod's _Newton's Wake_ had another Rapture but also various normal humans running around with backups. Wil McCarthy's Collapsium books seem post-Singularity in this sense, as does Cory Doctorow's _Down and Out in the Magic Kingdom_. Some readers seem to be getting tired of all this, so "no Singularity" seems interpretable as "no nanotech! No copies, backups, or uploads! No biological immortality!" Again, there are grounds for being skeptical of the nanotech, though my impression is that if other ways were found of doing the same things, the readers would still be unhappy.

Dropping further, we get closer to what I think of as Vinge's original ideas. One form is that when we learn to increase our intelligence, those smarter results will be able to increase their own intelligence even faster, in an exponential growth for at least a while, with accompanying exponential growth in other technologies. The Technorapture is just an extreme manifestation of the process; the post-scarcity, post-mortality technologies are sometimes a spinoff, sometimes an enabler (that level of nanotech being considered useful for the mental technologies). Skepticism: since the smarter beings are probably also more complex, they may need all their enhanced brains just to accomplish a similar absolute increase. Whether a continued linear increase in intelligence is really less worldshaking (especially for the SF writer) than an exponential one is a question I leave to the reader.
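To make the two growth regimes concrete, here's a toy sketch (my own illustration, not anything from Vinge; the constants are arbitrary):

```python
# Toy model: recursive self-improvement under two assumptions.
# If an intelligence I improves itself at a rate proportional to I,
# growth is exponential; if the difficulty of each improvement also
# scales with the complexity of the mind (taken here as proportional
# to I), the factors cancel and growth is merely linear.
k, dt = 0.1, 1.0          # arbitrary improvement rate and time step
i_exp = i_lin = 1.0
for _ in range(50):
    i_exp += k * i_exp * dt   # dI/dt = k*I             -> exponential
    i_lin += k * dt           # dI/dt = k*I/C(I), C ~ I -> linear
print(f"after 50 steps: exponential {i_exp:.1f}, linear {i_lin:.1f}")
```

The choice of the complexity cost C(I) is doing all the work here, of course; the point is only that the "intelligence explosion" and the skeptic's linear crawl differ by a single assumption about how improvement difficulty scales.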

Which brings us to the root of it all: John Campbell's rejection of Vinge's sequel to "Bookworm, Run!", with a human enhanced like the chimp of the original story: "You can't tell this story, and neither can anyone else." This has cousins in Niven's writings on superintelligent beings; the basic idea is that you can't plausibly write about someone much smarter than yourself, let alone a society of them. Sure, they might still have human motivations, but can you portray their thoughts, their actions, the technologies or social arrangements they would produce? Can you understand them? -- Which actually leads me to a common conflation: difficulty of *prediction* is not the same as impossibility of *understanding*; we might well be able to understand a posthuman world (even if not quickly enough to keep up) without being able to create one. Loose analogies fly around at this point; some say "they'll be to us as we are to dogs", while I invoke Turing-completeness and note that dogs just aren't that good at understanding each other, in the sense we mean it. It's not that we're too complex for dogs to understand; dogs aren't complex enough to understand much of anything.

But anyway, at this point the cry of "no more Singularity" sounds like "don't show enhanced intelligence!", which I think gets to be a problem if we're pretending to talk about hard SF. As Vinge wrote in his classic essay, there are multiple routes to enhancing intelligence. Developing human-level AI and making it better is just one of them, and not even that frequent in Vinge's own writing. A second is IA, "intelligence augmentation", itself with multiple forms; one is computer prosthetics for human brains, as shown in _The Peace War_, _Marooned in Realtime_, and "True Names". A third is connecting human minds together, mentioned in _Marooned in Realtime_ -- usually as direct neural links, but I note that if we take "mind as society of agents" seriously, that leads to "society of humans as a crude mental process", and perhaps such thinking would lead to better organizations of corporations and governments than the recursive primate hierarchies we tend to use.

The final approach is genetic enhancement, and here I think skepticism must run aground. Even if you don't believe in AI, for philosophical or practical reasons, even if plugging network jacks into the brain doesn't seem all that transformative, even if simple chemical interventions don't help much and nanotech rewiring is impossible, there remain the facts that the brain is a biological machine constructed by our genes (with, yes, environmental influence), that IQ varies, and that it seems to vary largely with genes (80% heritable, by some estimates -- and a high heritability would make IQ easier to control, not harder!). Twiddle with the right genes and you can get results, even if you don't know exactly how it all works. (Vingean example: _Tatja Grimm's World_.)

We're not far from it now. Find correlations of genes with IQ and personality traits, create multiple embryos, and select the most preferred gene combination among them, and you can create quite strong selection pressure in the direction you want; this could well be next-decade tech. (Fiction: the movie "Gattaca", though I think they selected among 8 eggs, while I could see a whole ovary's worth being used.) Learn how the homeobox and other developmental genes work and you can try tricks such as creating bigger brains, bigger brain-body ratios, and perhaps controlled sizes of areas within the brain. Mice with bigger brains have already been created (though the article didn't mention whether they seemed any smarter). This is a bit further off (we need to learn more, and test it on animals), but possibly not that far; there are practical and ethical problems to worry about (physiological side effects of a large head, the necessity for C-sections, the possibility of this causing speciation and a lack of breeding partners), but there's also the temptation of making 300+ IQs.
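Back-of-the-envelope, the selection step is simple order statistics. Here's a sketch with my own illustrative numbers (an IQ standard deviation of 15 and the 80% heritability figure above), not anything from the essay's sources:

```python
# Rough sketch of embryo selection as an order-statistics problem.
# Assume the trait is normal with SD 15 on the IQ scale, and that a
# fraction h2 of the variance is additive genetic and hence visible
# to selection among embryos.  Numbers are illustrative only.
import random

def best_of_n_gain(n, h2=0.8, sd=15.0, trials=20000):
    """Average IQ advantage of the top-scoring embryo out of n."""
    genetic_sd = sd * h2 ** 0.5   # SD of the selectable, genetic part
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0, genetic_sd) for _ in range(n))
    return total / trials

for n in (2, 8, 100):   # 8 ~ the Gattaca recollection; 100 ~ an ovary
    print(n, "embryos:", round(best_of_n_gain(n), 1), "IQ points")
```

Under these assumptions the best of eight embryos sits well over one genetic standard deviation above the parental expectation, and a hundred embryos pushes that to around two and a half -- strong selection pressure indeed, applied per generation rather than per century.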

At this level, what I see the Singularity really saying is "our tools don't only work out there, or in our bodies for simple healing. Our bodies and brains are machines we can take apart, understand, improve, and perhaps copy; this will happen, this will happen soon, and this will have consequences." This is what unifies the nanotech immortality pill with the brain-computer interface or the gene-selected supermind or the upload, namely the treatment of the human condition in all its aspects as a controllable material process. The future doesn't just offer more food, water, energy, and neat clothes; it offers people who are, as a result of their parents' will if not their own, smarter and saner and stronger and healthier and more beautiful than we are. At the very least.

So, please, specify which Singularity it is that you're tired of!

