Here is a section from my paper "How long before superintelligence?"
(http://www.hedweb.com/nickb/superintelligence.htm). It might not be
the right time for a bulky post on a new thread, since there are
already so many interesting things going on here, but I'm
kinda impatient for people's comments.
Software via the bottom-up approach:
Superintelligence requires software as well as hardware. There are
several approaches to the software problem, varying in the amount of
top-down direction they require. At the one extreme we have systems
like CYC, which is a very large encyclopedia-like knowledge base and
inference engine. It has been spoon-fed facts, rules of thumb and
heuristics for over a decade by a team of human knowledge engineers.
While systems like CYC might be good for certain practical tasks, this
hardly seems like an approach that will convince AI-skeptics that
superintelligence might well happen in the foreseeable future. We have
to look at paradigms that require less human input, ones that make
more use of bottom-up methods.
Given sufficient hardware and the right sort of programming, we could
make machines learn in the same way as a child does, i.e. by
interacting with human adults and with other objects in the environment.
There are well-known methods, such as the backpropagation algorithm,
that can achieve good results in many small applications involving
multi-layer neural networks. Unfortunately this algorithm doesn't
scale well. The Hebbian learning rule, on the other hand, is perfectly
scalable (it scales linearly, since each weight update only involves
looking at the activity of two nodes, independently of the size of the
network). It is known to be a major mode of learning in the brain. It
appears likely, though, that the Hebbian rule is not the only learning
mode operating in the brain. We would also need to consider, for
instance, reward-induced learning (Morillo 1992) and other learning
modes yet to be discovered. Moreover, it is not known how purely
Hebbian learning could allow the brain to store structured
representations in long-term memory, although several mechanisms have
been proposed (Bostrom 1996).
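To illustrate the scaling point: in its simplest form the Hebbian rule
changes each weight using only the activities of the two nodes that
the weight connects, so the cost of a learning step grows linearly
with the number of weights and no global error signal has to be
propagated back through the network, as in backpropagation. The
following minimal sketch (in Python; the layer sizes, learning rate
and outer-product formulation are illustrative assumptions, not part
of the paper) is one way to write down that locality:

    import numpy as np

    def hebbian_update(W, pre, post, lr=0.01):
        # One Hebbian step: the change to W[i, j] depends only on the
        # presynaptic activity pre[j] and the postsynaptic activity
        # post[i], so the work is proportional to the number of weights.
        # W    -- weight matrix, shape (n_post, n_pre)
        # pre  -- presynaptic activity vector, shape (n_pre,)
        # post -- postsynaptic activity vector, shape (n_post,)
        return W + lr * np.outer(post, pre)

    # Illustrative usage with random activities.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))
    W = hebbian_update(W, pre=rng.random(3), post=rng.random(4))

Backpropagation, by contrast, must send the error computed at the
output layer back through every intermediate layer before any weight
can be changed, which is part of why it is harder to scale up.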
Creating superintelligence through imitating the functioning of the
human brain requires two more things in addition to appropriate
learning rules (and sufficiently powerful hardware): an adequate
initial architecture and a rich flux of sensory input.
The latter prerequisite is easily provided even by present technology.
Using video cameras, microphones and tactile sensors, it is possible
to ensure a steady flow of real-world information to the artificial
neural network. An interactive element could be arranged by connecting
the system to robot limbs and a speaker.
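As a rough illustration of what such a sensory flux could look like in
practice, here is a minimal sketch (in Python, reading a webcam through
the OpenCV library; the frame size, the flattening into an activity
vector and the process_input callback are illustrative assumptions)
that streams camera frames to a learning system. Microphones and
tactile sensors would simply be further input channels, and motor
output to robot limbs and a speaker would close the interactive loop:

    import cv2
    import numpy as np

    def run_sensory_loop(process_input, n_steps=1000):
        # Stream frames from the default camera as a steady flux of
        # real-world input for an artificial neural network.
        cap = cv2.VideoCapture(0)
        try:
            for _ in range(n_steps):
                ok, frame = cap.read()
                if not ok:
                    break
                # Downsample and flatten the frame into an activity vector.
                x = cv2.resize(frame, (32, 32)).astype(np.float32).ravel() / 255.0
                process_input(x)  # e.g. a Hebbian update of the network
        finally:
            cap.release()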
Developing an adequate initial network structure poses a more serious
problem. It might turn out to be necessary to do a considerable amount
of hand-coding in order to get the cortical architecture right. In
biological organisms, the brain does not start out at birth as a
homogenous tabula rasa; it has an initial structure that is coded
genetically. Neuroscience cannot, at its present stage, say exactly
what this structure is or how much of it needs to be preserved in a
simulation that is eventually to match the cognitive competencies of a
human adult. Achieving human-level AI through the neural network
approach could prove unexpectedly difficult if it turned out that the
human brain relies on a colossal amount of genetic hardwiring, so that
each cognitive function depends on a unique and hopelessly complicated
inborn architecture, generated over the aeons in the evolutionary
learning process of our species.
Is this the case? There are at least three general considerations that
suggest otherwise. We have to content ourselves with a very brief
review of these considerations here. For a more comprehensive
discussion, the reader may consult Phillips & Singer (1996).
First, consider the plasticity of the neocortex, especially in
infants. It is known that cortical lesions, even sizeable ones, can
often be compensated for if they occur at an early age. Other cortical
areas take over the functions that would normally have been developed
in the destroyed region. In one study, for example, sensitivity to
visual features was developed in the auditory cortex of neonatal
ferrets, after that region's normal auditory input channel had been
replaced by visual projections (Sur et al. 1988). Similarly, it has
been shown that the visual cortex can take over functions normally
served by the somatosensory cortex (Schlaggar & O'Leary 1991). A
recent experiment (Cohen et al. 1997) showed that people who have been
blind from an early age can use their visual cortex to process tactile
stimulation when reading Braille.
It is true that there are some more primitive regions of the brain
whose functions cannot be taken over by any other area. For example,
people who have their hippocampus removed lose the ability to learn
new episodic or semantic facts. But the neocortex tends to be highly
plastic, and that is where most of the high-level processing is
executed that makes us intellectually superior to other animals. It
would be interesting to examine in more detail to what extent this
holds true for all of neocortex. Are there small neocortical regions
such that, if excised at birth, the subject will never attain certain
high-level competencies, not even to a limited degree?
The second consideration that seems to indicate that innate
architectural differentiation plays a relatively small part in
accounting for the performance of the mature brain is that, as far
as we know, the neocortical architecture in humans, and especially in
infants, is remarkably homogeneous over different cortical regions and
even over species:
"Laminations and vertical connections between lamina are hallmarks of
all cortical systems, the morphological and physiological
characteristics of cortical neurons are equivalent in different
species, as are the kinds of synaptic interactions involving cortical
neurons. This similarity in the organization of the cerebral cortex
extends even to the specific details of cortical circuitry." (White
1989, p. 179)
The third consideration is an evolutionary argument. The growth of
neocortex that allowed Homo sapiens to intellectually outclass other
animals took place over a relatively brief period of time. This means
that evolutionary learning cannot have embedded very much information
in these additional cortical structures that give us our intellectual
edge. This extra cortical development must rather be the result of
changes to a limited number of genes, regulating a limited number of
cortical parameters.
These three considerations support the view that the amount of
neuroscientific information that is needed for the bottom-up approach
to succeed is fairly limited. (None of these considerations is an
argument against modularization of adult human brains. They only
indicate that a considerable part of the information that goes into
the modularization results from self-organization and perceptual input
rather than from an immensely complicated genetic look-up table.)
Further advances in neuroscience are needed before we can construct a
human-level (or even higher animal-level) artificial intelligence by
means of this radically bottom-up approach. While it is true that
neuroscience has advanced very rapidly in recent years, it is
difficult to estimate how long it will take before enough is known
about the brain's neuronal architecture and its learning algorithms to
make it possible to replicate these in a computer of sufficient
computational power. A wild guess: something like fifteen years. This
is not a prediction about how far we are from a complete understanding
of all important phenomena in the brain. The estimate refers to the
time when we might be expected to know enough about the basic
principles of how the brain works to be able to implement these
computational paradigms on a computer, without necessarily modelling
the brain in any biologically realistic way.
The estimate might seem to some to underestimate the difficulties, and
perhaps it does. But consider how much has happened in the past
fifteen years. The discipline of computational neuroscience hardly
even existed back in 1982. And future progress will occur not only
because research with today's instrumentation will continue to produce
illuminating findings, but also because new experimental tools and
techniques will become available. Large-scale multi-electrode recordings
should be feasible within the near future. Neuro/chip interfaces are
in development. More powerful hardware is being made available to
neuroscientists to do computation-intensive simulations.
Neuropharmacologists are designing drugs with higher specificity,
allowing researchers to selectively target given receptor subtypes.
Present scanning techniques are being improved and new ones are under
development.
The list could be continued. All these innovations will give
neuroscientists very powerful new tools that will facilitate their
research.
________________________________________________
Nick Bostrom
http://www.hedweb.com/nickb