Re: poly: Software for superintelligence

From: Hal Finney <>
Date: Wed Jan 07 1998 - 19:19:53 PST

Nick Bostrom, <>, writes:
> Given sufficient hardware and the right sort of programming, we could
> make the machines learn in the same way as a child does, i.e. by
> interacting with human adults and other objects in the environment.
> There are well known methods, such as the Backpropagation algorithm,
> that can achieve good results in many small applications involving
> multi-layer neural networks.

It will probably be necessary to understand how learning occurs in the
brain before we can expect a neural network actually to evolve
intelligence. That understanding would be part of the advances in
neuroscience needed before the project could expect success.
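
For readers unfamiliar with the algorithm Bostrom mentions, here is a
minimal backpropagation sketch (not from the original post; the XOR task,
network size, and learning rate are illustrative assumptions):

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative setup: a tiny 2-4-1 network trained on XOR by gradient descent.
XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

n_in, n_hid = 2, 4
# w_hid[i][j]: input i -> hidden j (last row is the bias); w_out[j]: hidden j -> output
w_hid = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_in + 1)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    xb = x + [1]                       # append bias input
    h = [sigmoid(sum(xb[i] * w_hid[i][j] for i in range(n_in + 1)))
         for j in range(n_hid)]
    y = sigmoid(sum((h + [1])[j] * w_out[j] for j in range(n_hid + 1)))
    return h, y

def train_epoch(lr=0.5):
    total = 0.0
    for x, t in XOR:
        h, y = forward(x)
        total += (y - t) ** 2
        # Backward pass: delta rules for squared error with sigmoid units.
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        for j, hj in enumerate(h + [1]):
            w_out[j] -= lr * d_out * hj
        for i, xi in enumerate(x + [1]):
            for j in range(n_hid):
                w_hid[i][j] -= lr * d_hid[j] * xi
    return total

loss0 = train_epoch()
for _ in range(5000):
    loss = train_epoch()
```

This kind of procedure is what "good results in many small applications"
refers to; the open question in the thread is whether anything like it
scales to a brain-sized architecture.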

> Developing an adequate initial network structure poses a more serious
> problem. It might turn out to be necessary to do a considerable amount
> of hand-coding in order to get the cortical architecture right. In
> biological organisms, the brain does not start out at birth as a
> homogenous tabula rasa; it has an initial structure that is coded
> genetically.

You are probably right that this is the hardest part of the problem.

> Neuroscience cannot, at its present stage, say exactly
> what this structure is or how much of it needs to be preserved in a
> simulation that is eventually to match the cognitive competencies of a
> human adult. One way for it to be unexpectedly difficult to achieve
> human-level AI through the neural network approach would be if it
> turned out that the human brain relies on a colossal amount of genetic
> hardwiring, so that each cognitive function depends on a unique and
> hopelessly complicated inborn architecture, generated over the aeons
> in the evolutionary learning process of our species.

I think this is a strawman. The problem could be extremely difficult
without there being a "colossal" amount of genetic structure. It only
takes a few bits of information before trial-and-error becomes hopelessly
inadequate as a strategy for finding the right structure. I think we are
going to have to do a lot of work to understand the cortical architecture
in much more detail, as well as how neurons learn.
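
The "few bits" point can be made concrete with back-of-the-envelope
arithmetic (my illustration, not the original post's; the evaluation rate
is an assumed figure):

```python
# If the required innate structure amounts to n bits, blind trial-and-error
# must search a space of 2**n candidate architectures.
AGE_OF_UNIVERSE_S = 4.35e17  # roughly 13.8 billion years, in seconds

def seconds_to_enumerate(bits, evals_per_second=1e9):
    """Worst-case time to try every n-bit architecture exhaustively."""
    return 2 ** bits / evals_per_second

# At a (generous) billion evaluations per second, 30 bits takes about a
# second, but 90 bits already exceeds the age of the universe.
fast = seconds_to_enumerate(30)
slow = seconds_to_enumerate(90)
```

So even a modest amount of genetically specified structure is enough to
rule out undirected search, which is why understanding the cortical
architecture directly matters.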

> [...]
> These three considerations support the view that the amount of
> neuroscientific information that is needed for the bottom-up approach
> to succeed is fairly limited.

That may be the case, but "fairly limited" still leaves the possibility
that it will take an enormous amount of work to determine what is actually
going on in the brain.

> Further advances in neuroscience are needed before we can construct a
> human-level (or even higher animal-level) artificial intelligence by
> means of this radically bottom-up approach. While it is true that
> neuroscience has advanced very rapidly in recent years, it is
> difficult to estimate how long it will take before enough is known
> about the brain's neuronal architecture and its learning algorithms to
> make it possible to replicate these in a computer of sufficient
> computational power. A wild guess: something like fifteen years.

This sounds early to me.

It seems to me that our understanding of how the brain works is still
in the rudimentary stages. Nitric oxide was only recently discovered
to play an important role in long-term potentiation, which is probably
fundamental to learning, and this implies that LTP occurs on a much
larger scale than was previously thought. You may take this as evidence
for great progress, but to me it looks like great ignorance.

I also question the whole idea of trying to train a cortex-like simulated
neural network into intelligence. Dogs and other animals can't become
intelligent even with lots of training, yet they have cortexes. Some
humans are said to have very small cortexes but can still function at a
reasonably normal level (not sure how reliable these reports are though).
Some cetaceans have larger cortexes than people but never learn to read
or do arithmetic.

To me this suggests that human-style intelligence requires more than a
"generic cortex-like tissue" which is what it sounds like you might be
proposing to simulate. I agree that your examples show that there is
flexibility within the cortex, but there still must be something specific
to human brains.

> The estimate might seem to some to underestimate the difficulties, and
> perhaps it does. But consider how much has happened in the past
> fifteen years. The discipline of computational neuroscience did hardly
> even exist back in 1982. And future progress will occur not only
> because research with today's instrumentation will continue to produce
> illuminating findings, but also because new experimental tools and
> techniques become available. Large-scale multi-electrode recordings
> should be feasible within the near future. Neuro/chip interfaces are
> in development. More powerful hardware is being made available to
> neuroscientists to do computation-intensive simulations.

What I as a layman find frustrating about biology (and most sciences for
that matter) is how indirect all the studies have to be. Naively you'd
think that if they want to know what neurotransmitters exist, they
could just stick probes up next to the neurons and see what chemicals
are ejected when they fire. But in practice this does not work; our
probes are too clumsy and our analytical techniques too crude.

I have a theory that neurons change their shapes drastically over
their lifetimes, extending new processes, coming into contact with other
neurons, even to a limited extent crawling through the brain, and that
this neural mobility plays an important role in the brain. But the
tools we have today are unsuited to studying this kind of phenomenon.
Once you freeze and section a brain and put it under the EM it's a little
late to watch the neurons wiggle.

It's very difficult to investigate the behavior of the brain without
damaging it. I'm sure you're right that new measurement techniques
will play a crucial role, but I don't know if there is anything that
will make a big difference in the next 15 years.

Received on Thu Jan 8 03:40:11 1998

This archive was generated by hypermail 2.1.8 : Tue Mar 07 2006 - 14:45:29 PST