Singularity Summit 2012
For more transcripts, videos and audio of Singularity Summit talks visit intelligence.org/singularitysummit.
Speaker: Vernor Vinge
Transcriber(s): Ethan Dickinson and Jeremy Miller
Moderator: Vernor Vinge is next. He is a retired San Diego State professor of mathematics, a computer scientist, and an award-winning science fiction author, perhaps best known for his Hugo Award-winning novels and novellas, including "A Fire Upon the Deep" and "A Deepness in the Sky." His well-known and often-cited 1993 essay, "The Coming Technological Singularity," argues that the creation of superhuman artificial intelligence will mark the end of the human era, and that our existing models and theories will prove insufficient to make any further predictions.
Please join me in welcoming our next speaker, Vernor Vinge.
[applause]
Vernor Vinge: Thank you. I think the years we're coming up on – the crazy teens of the 21st century – are an extraordinarily exciting time. After all, the centerpiece, or at least half of a good reason for dating the singularity is the Moore's law estimate combined with estimates of the computational competence of the hardware that we carry around in our heads. Both of those numbers are highly dubious, and the whole uncertainty is compounded by the fact that we don't have good notions of what the necessary software solutions are. But still, we are coming up on that era where, for the first time, by optimistic measures, sufficient hardware may be around. That is a reason to expect that at least we'll have lots of fun surprises in the teens.
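A back-of-envelope sketch of the estimate Vinge alludes to here: extrapolate Moore's-law hardware growth until it reaches an assumed brain-scale computation figure. All of the numbers below are illustrative assumptions, not figures from the talk; published estimates of the brain's capacity alone span roughly 10^14 to 10^18 operations per second.

```python
import math

# Back-of-envelope version of the dating argument. Every figure here is
# an illustrative assumption, not a number from the talk.
brain_ops_per_sec = 1e16      # assumed computational competence of the brain
hardware_ops_2012 = 1e13      # assumed ops/sec of a high-end 2012 machine
doubling_time_years = 2.0     # classic Moore's-law doubling period

doublings_needed = math.log2(brain_ops_per_sec / hardware_ops_2012)
years_to_parity = doublings_needed * doubling_time_years
print(f"~{doublings_needed:.1f} doublings, parity around {2012 + years_to_parity:.0f}")
# ~10.0 doublings, parity around 2032. Shift either input by an order of
# magnitude and the date moves by years, which is exactly why Vinge calls
# both numbers highly dubious.
```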
One thing that I think is very important, no matter what one's beliefs about the future are, is the idea of thinking in terms of scenarios. Not one scenario – that's very important – but lots of different scenarios, including scenarios at the extremes of what you can bring yourself to believe could possibly happen.
For instance, although I consider the singularity to be the most likely non-catastrophic scenario for the next few decades, I've actually had a lot of fun thinking about non-catastrophic scenarios where the singularity does not happen. If you google on "what if the singularity doesn't happen," I think I may come in first on a Google search. I gave a talk I'm very proud of at The Long Now Foundation a few years ago, and that sort of thing is very much worth thinking about.
Within the scope of singularity scenarios, I think there is also room for a substantial variety of outcomes and methods. In fact, I think the range of possible ways of getting to the singularity is quite large, and that it's not sufficiently looked at. All of these paths involve technology, and all involve computation, usually at very intense levels, yet some of them are not thought of by a lot of people; indeed, there are a lot of people working to make these scenarios happen who may never have heard the term "technological singularity," or even have a notion of the idea of it.
What I'd like to do is go down through four of the five possibilities that I think are obvious for getting us to a greater than human intelligence.
The first on my list is just classical artificial intelligence. For instance, the project that Ray [Kurzweil] outlined falls under that category. That general area is going very well in my opinion, and as I mentioned a minute ago, we're looking at hardware that should become more and more congenial to such projects in this coming decade. I really expect and look forward to Watson-like events addressing more and more components of what it is to be a person.
The debate over these issues is going to become more and more intense, and during this time, as we fall toward the singularity, I think we're going to be exposed to more and more technical ideas, as well as philosophical ideas, that come close to being something new under the sun. Two of the talks earlier today, Jaan Tallinn's and Robin Hanson's, explored situations that are seriously strange. They're situations that have been talked about before, although probably no human ever talked about them before 1970.
They reveal something that I think is going to be happening more and more. When I used to talk about the singularity, one feature that did not entirely sit well with some of the people I was talking to was my assertion that the rise of superhuman intelligence was not just unpredictable – that's something you can say about the future anyway – but unintelligible, even if by some magical means you could get a friend to go forward and report back to you. I still think that. If you look back at Jaan Tallinn's talk and Robin Hanson's talk, they both show that this is an era and a possibility of technology where you're very clearly not in Kansas anymore.
I think there are also some things about the pure AI situation that may be surprising in a more limited direction. If you watch demos of robotics and think about them, they've become progressively more impressive. One of my great joys is watching YouTube demos of what robots can do, and I'm progressively more impressed. There is one issue, though: there are a lot of things about a YouTube robotics demo that lead one to wonder exactly what the conditions are. I think those conditions should be stated very transparently.
They also illustrate another thing: some of the features we demanded of minimum acceptable solutions may have been more than is necessary. I don't know how many of you have noticed this, but suppose you're the type of inventor or engineer, like some of my friends, who has been working for years on something that nobody believed could be done, and you were arguing fervently that it could be done – I'm not talking about something as big as the singularity, I'm just talking about ordinary revolutionary developments.
[laughter]
Vernor: You have these knowledgeable opponents who keep telling you it can't be done, and you keep saying it can be done, and then when you finally do it, there is one reaction that you will get in addition to the usual ones – Clarke has a sequence of responses that the establishment has to surprising advances, but one thing that's not on his list is people who will look at what you've done, see how you've actually proved that you have done it, and then stand back and say, "Yeah, but that's cheating."
[laughter]
Vernor: In other words, there had been something on their list of the reasons why it couldn't happen that was in fact a logical reason why it couldn't happen, and unfortunately for them, and unfortunately for the whole world that had to wait so long for the advance, it was an objection that did not apply to the critical thing involved in the advance that you have just achieved. One thing to do is look at what you're doing, whether it's in this area or in others, and keep in mind the irreducible outcome that is being looked for. Then go back to the claims that what you're doing is a perpetual motion machine, or that it violates thermodynamics or Shannon's law or whatever, and see if there isn't something about the problem that renders that objection not untrue, but not quite relevant to the way you intend to solve the problem.
One particular example of that, which happens with robotics issues, is a prejudice that I think I've had my whole life, and that I think is shared by most people here: we really deprecate robotics solutions that happen in pre-prepared environments. By "pre-prepared environment," I mean that the robot, for instance, has cooperating sites in the room, or at least has some sort of edge detectors that have been mounted in the room – things that give it clues about the nature of objects. When a program takes advantage of things like that, those artificial aspects of the environment can really trivialize problems that are otherwise very, very complicated.
My classic example of that is that I figured that a robot that could clean a bachelor's unprepped bathroom –
[laughter]
Vernor: – would be something that would be very close to satisfying the singularity.
[laughter]
Vernor: I still sort of think that. However, I'm not that much against the notion of prepped environments anymore. The reason is, I started looking at myself and other living things in the environment. Through hundreds of millions of years of co-evolution, we actually benefit from an enormous amount of pre-preparation in the environment. If you really got rid of all the other life cues in the environment, we would ourselves have a very difficult time performing. Since it's pretty evident that in most urban areas we're naturally going to get heavy machine prepping, I think it's both philosophically, and for practical purposes, not quite so important to deprecate prepped environments.
I think classical artificial intelligence research is progressing, but as I said, I think there are other things, some of them not even recognized. One is what is often called intelligence amplification. That's where the humans are still at the center of things, but as David Brin puts it, the machines and the computers provide a neocortex for the human, even if it may not be Borg-like attached to the top of your head. In that environment, as Brin puts it, the machines provide the computational and data-processing horsepower, and the humans provide what we've always been best at, and that is wanting things.
[laughter]
Vernor: Another way of looking at intelligence amplification is that it's a case of extreme UIs. Lots of people are working on UIs, so let me make up a definition for an extreme UI. An extreme UI is one that has a form factor not that different from an unaided human, and where the feature the user interface supplies has the latency and convenience of our normal cognitive faculties. For instance, right now I could look like an expert on cinema, as long as you couldn't see or hear me clacking away on my keyboard. An extreme UI would give me that capability in a form where I could come back with answers roughly as fast as a person can remember things.
For years, I've been wanting to see chess tournaments that weren't between computers, and weren't purely between people, but which allowed any combination of the two to enter the tournament. Actually, that has come to pass. I'd like to see that in general with things that involve people. In some cases it would be outright contests, in some cases it might be X Prize type events, where the idea is you want a user interface that is not merely convenient, but that is as convenient as the cognitive feature that the support is providing.
That's a beginning step toward the ultimate dream of intelligence amplification, and that is that the humans themselves become that greater-than-human intelligence that's part of the definition of the technological singularity.
Back in 1982, when I was on a panel and suggested the term singularity, I was out in Pittsburgh on a riverboat with Hans Moravec, and I think it was after this panel where I had made this suggestion about the terminology. He said, "You know, Vernor, I don't mind this term 'singularity.' That sounds like an OK term, and I understand why – the unknowability that you're talking about – but frankly, if you enhanced your own intelligence in sync with the improvements in the hardware, you would ride the curve of improved cognition. As it transcended human-level intelligence, it would be a smooth thing for the participant. That participant would understand what's going on without any catastrophic, tortured feelings of unintelligibility." Then he looked at me and said, "And I intend to ride that curve."
I think the intelligence amplification angle is going fine. Some of these different paths to the singularity illustrate, as much as anything, how strange this world could be, and why my claim of unintelligibility has merit. I think that's best illustrated by the third item on my list, which is digital gaia. That is simply the ensemble of the world's networked embedded microprocessors.
Imagine if all of our artifacts had microprocessors in them – we're about five percent of the way there, actually, I think. There's a great profit in that, in making the world more convenient. These devices are not especially smart, but they know what they are, they know where they are, and they can talk to their nearest neighbors a few meters away; by extension, through networking, they presumably could, if necessary, talk to any such node in the connected network. Also, they would tend to have various suites of sensors and even effectors. In addition to the embedded systems, in places where there were not a lot of natural artifacts there would probably be lots of free-floating systems with those capabilities.
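A minimal sketch of what one of those nodes amounts to in code – the class, the names, and the flooding scheme below are illustrative assumptions, not any real protocol from the talk. Each node knows what it is, knows where it is, lists its nearest neighbors, and can reach any connected node by relaying hop by hop:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    ident: str                  # "what it is"
    position: tuple             # "where it is" (x, y in meters)
    neighbors: list = field(default_factory=list)  # nodes a few meters away

    def route_to(self, target_ident):
        """Breadth-first relay through neighbor links: any node in the
        connected network is reachable even though each hop only spans
        a few meters."""
        frontier, seen = [(self, [self.ident])], {self.ident}
        while frontier:
            node, path = frontier.pop(0)
            if node.ident == target_ident:
                return path
            for nb in node.neighbors:
                if nb.ident not in seen:
                    seen.add(nb.ident)
                    frontier.append((nb, path + [nb.ident]))
        return None  # unreachable: the mesh is partitioned

# Three artifacts in a row; the lamp reaches the door only via the chair.
lamp, chair, door = Node("lamp", (0, 0)), Node("chair", (2, 0)), Node("door", (4, 0))
lamp.neighbors, chair.neighbors, door.neighbors = [chair], [lamp, door], [chair]
print(lamp.route_to("door"))   # ['lamp', 'chair', 'door']
```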
Imagine this world, with these devices. In a sense you could say that once this was in place, reality itself would wake up. That is strange. I think that reveals why "digital gaia" is actually an appropriate term. It's a fundamental change in the nature of reality, really. In many ways, that's a very good sort of thing, a very powerful sort of thing, but think about this.
In our world, we have a model of reality. For instance, I'm holding this ballpoint pen. If I let this pen go, we all have a very good idea of what's going to happen next, although I gave this talk once at a place where the floor was a little bit sloping, and everybody got a great laugh when the pen ended up over there. But still, we have very good ideas of what's going to happen in terms of the reality that we have here. Those doors at the end of the room, they may be locked at the end of this talk, or they may be locked right now, but you know that those doors are still there.
Imagine a world like the digital gaia world that I was just talking about. In that world, I have the conviction and the fear that it would have all the firm respectable stability that we currently associate with financial markets.
[laughter]
Vernor: That actually is a reason, first of all, for the world being very, very strange. You start gaming that world and you have a rather unusual situation. It's also pretty evidently a pretty dangerous world. Thinking about that then both illustrates the alienness aspect that I was talking about a minute ago, and the possible danger aspect.
The last item on my list is another one of the paths to the singularity that is going very, very well, and involves people. You notice actually, to varying degrees, humans are not at the center of the things that I was talking about on my list. We would like this all to be about humans, but it's not necessarily the case. The last item that I'm going to talk about here on my list though is something that is intensely related to people, and I think deserves to be on this list, and I will argue for that. That is, the Internet plus the databases and computers that are on the Internet, together with the algorithms that they can run now and reasonably can run in the future, plus millions and hundreds of millions of humans.
I used to turn my nose up at the notion of crowdsourcing and all those humans, because basically, when it comes to cognition, biology doesn't have legs. However, we have seven billion machines that can pass the Turing Test out there right now. That's an installed base. It has intellectual power if the people involved, those seven billion, are comfortable enough to participate and have the ability to participate and interface. If they do – and hundreds of millions of them already do – that is an intellectual institution that trumps all human intellectual institutions of the past. That's a very important point, and I invite you, if you haven't already, to do a little survey-type research on the range of things being done with crowdsourcing.
I gave a taxonomical talk about this at Singularity University this summer. They were great, they treated me super, the audience was super, and I really felt it was a home-run talk; it's probably on the Singularity University site. The variety of different things that can be done is really extraordinary.
Some of them, like Foldit for protein folding, turn out to involve a relatively small number of humans, but the combination of the network and the interaction between the players identifies particular players who, even though they may not have biological training, have an extraordinary ability to see how proteins will fold – better than trained molecular biologists.
That's one extreme. The other extreme has so far happened much more often, and that is projects involving pattern recognition over very large data sets. The astronomy people were among the first to do this, I think. In those cases you also get very strange results, like projects that were expected, if they went well, to take a year or so to complete their analysis of the database, and the analysis was done 100 hours later. 100 hours after they opened the doors to people, what they thought would take a year to do had been done.
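The shape of that speedup is easy to see with a back-of-envelope calculation. The task size and rates below are invented for illustration; only the "a year versus roughly 100 hours" contrast comes from the talk.

```python
# Invented figures: a classification task a lab would grind through in
# about a year, handed to a modest crowd of casual volunteers.
classifications_needed = 1_000_000          # assumed size of the dataset
lab_rate_per_hour = 120                     # assumed: a small research team
crowd = 20_000                              # assumed concurrent volunteers
volunteer_rate_per_hour = 0.5               # assumed: a few clicks, casually

lab_hours = classifications_needed / lab_rate_per_hour
crowd_hours = classifications_needed / (crowd * volunteer_rate_per_hour)
print(f"lab alone: ~{lab_hours / (24 * 365):.1f} years")   # ~1.0 years
print(f"crowd:     ~{crowd_hours:.0f} hours")              # ~100 hours
```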
That sort of thing indicates to me that there is a very large amount of potential in working on the software for particular goals. It's not just classical crowdsourcing; there are all sorts of things that can be done. They can take different cross-sections of cognition and use them to empower this resource that we have in the people of the world.
This also, I think, is perhaps the most important point, getting back to the title of my talk, "Who's Afraid of First Movers?" Perhaps that depends on who the first mover is. We have some evidence that first movers in the past, in the biological world, have done terrible things to their potential competitors. I don't think that's the way it's going to work out this time around.
For one thing, I think there's a good chance that the last possibility I described, humans plus the Internet, will be the first mover. One thing about us humans is that we can't resist tinkering with our machines. So basically, I don't think this is a first mover that is going to wipe out the machines; however, since it's profoundly human, it is a first mover that gives us a very good chance to be in a position to safely guide the other developments that happen.
I think the outcome is going to consist of some wildly different forms of cognition – what the science fiction writer Karl Schroeder calls "different species of mind" – and that those differences are going to be astounding and very helpful to the entire kingdom of life that I see coming at us in the 21st century.
I think I'll stop there, thank you.
[applause]
[Q&A begins]
Man 1: Thanks. Great talk Vernor, again. I would like to have a deeper sense of your last point. If I understand you correctly, you say the group mind, the crowdsourcing aspect, will mitigate the risks in the other scenarios – artificial intelligence, intelligence amplification, digital gaia? Can you give us a deeper sense of how you see this in practice, how the crowd might mitigate those different risks? Thank you.
Vernor: Actually, I have a temptation to just opt out of answering by pointing out that this is one reason we need something as smart and multifarious as the group mind. There's a range of threats, actually, that the Singularity Institute is dedicated to thinking about, and the group mind provides, I think, a tool for looking at more possibilities, and for looking at mitigating methods with regard to the different things that could go wrong. Currently the one that scares me the most is the digital gaia one. In view of the fact that I don't have any opinion about how to cure the financial markets, I think I have even less of an opinion about how we can grapple with digital gaia.
[next question]
Man 2: I'm curious of your thoughts on the whole-brain emulation or upload scenario. Do you consider that to be part of intelligence amplification, or implausible, or just not plausibly the first mover?
Vernor: Uploads that are done by some sort of "perfect" scan of the brain – "perfect" in quotes – seem to me to depend on breakthroughs that are really radically different from anything we can point our finger at. On the other hand, consider the idea of making a device that could, say, really pass an industrial-strength version of the Turing Test. If you had something like that, then it seems to me that fooling it into thinking it was the upload of a real person would actually be quite a tractable problem.
I think that if we get AGIs, we will immediately get something that says it's an upload. If it has been stocked with enough biographical and personal information about the individual it claims to have been uploaded from, it probably could convince a lot of humans. It's sort of a meta version on top of the Turing Test. Given that, I suspect that something like uploads will be a very big deal in the case that we have AGI on machines. From that, something like what Robin was talking about – although I think he intended more the scanning method – could well, it seems to me, be one of the things going on and happening along with everything else.
Moderator: Vernor Vinge, Hugo Award winner. Thank you very much.
[applause]