Re: poly: Software for superintelligence

From: carl feynman <>
Date: Thu Jan 08 1998 - 07:08:33 PST

At 11:38 PM 1/7/98 +0000, you wrote:
>Here is a section from my paper "How long before superintelligence?"
>( It might not be
>the right time for a bulky post on a new thread, since there's
>already so many interesting things going on here, but I'm
>kinda impatient for people's comments.

It's a nifty message. Glad to see it has nothing to do with evil robots
taking over the universe.

You say:

>One way for it to be unexpectedly difficult to achieve
>human-level AI through the neural network approach would be if it
>turned out that the human brain relies on a colossal amount of genetic
>hardwiring, so that each cognitive function depends on a unique and
>hopelessly complicated inborn architecture, generated over the aeons
>in the evolutionary learning process of our species.
I suspect that you are implicitly assuming a certain model for how the
brain develops, which would be complex, and then arguing that the brain
does not work that way, so it cannot be complex. Let me suggest another
model which would be consistent with your three observations, but would
still be very complex, in the sense that you worry about in the
above-quoted paragraph.

Your (straw man) model seems to be something like "Prenatal development
causes the creation of a complex architecture of various cell types
organized into subnetworks, each of which turns into a useful functional
area when exposed to stimuli after birth." Let's call this model A. In
another model, which I will call model B, the large-scale structure of the
cortex is fairly homogeneous, but there are many possible types each cell
can be recruited into, depending on signals it receives from its
environment. The cell types are regulated by mutually stimulating and
inhibiting sets of regulatory proteins, much like the regulatory cascades
that decide whether a fetal cell will be a liver cell or a spleen cell.
Depending on type, a particular cortical cell will have different
thresholds and electrical behavior, grow synapses onto other cells in
different patterns, display different recognition proteins on its surface,
and perhaps even release and respond to different neurotransmitters. In
order to make this model falsifiable (:-), here are two things neurons can't
do: grow new long-range (more than 1 mm) connections, and change from one
cell type to another, except along a one-way branching path.
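Model B's recruitment step can be sketched in a few lines of code. This is a toy
mutual-inhibition switch of the kind found in regulatory cascades, with invented
parameters and protein names -- nothing here is measured biology, it just shows
how a small environmental bias can commit a cell to one of two stable expression
states:

```python
def settle(signal, steps=400, dt=0.1):
    # Two mutually inhibiting regulatory proteins, a and b: each is
    # produced at a rate suppressed by the other (Hill-type inhibition)
    # and decays at a constant rate.  `signal` is an environmental bias
    # on protein a's production.  All parameters are invented.
    a, b = 1.0, 1.0
    for _ in range(steps):
        da = 2.0 / (1.0 + b ** 4) + signal - a
        db = 2.0 / (1.0 + a ** 4) - b
        a = max(0.0, a + dt * da)   # protein levels cannot go negative
        b = max(0.0, b + dt * db)
    return "type A" if a > b else "type B"

# A modest push in either direction commits the cell to one of two
# self-maintaining states, even after the signal is long gone:
print(settle(+0.5))  # type A
print(settle(-0.5))  # type B
```

Once settled, either state maintains itself, which is what makes the recruitment
effectively one-way, as stipulated above.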

Let's look at your three points:

>First, consider the plasticity of the neocortex, especially in
>infants. It is known that cortical lesions, even sizeable ones, can
>often be compensated for if they occur at an early age. Other cortical
>areas take over the functions that would normally have been developed
>in the destroyed region.

Model B explains this by saying that recruitment to various cell types
happens largely during infant development. Cells are recruited from a
less-differentiated pool into more differentiated states.

>The second consideration that seems to indicate that innate
>architectural differentiation plays a relatively small part in
>accounting for the performance of the mature brain is that, as far
>as we know, the neocortical architecture in humans, and especially in
>infants, is remarkably homogeneous over different cortical regions and
>even over species:

This is explained just fine by model B. In fact, the less-differentiated
cortex of infants is exactly what the model would predict.

>"Laminations and vertical connections between lamina are hallmarks of
>all cortical systems, the morphological and physiological
>characteristics of cortical neurons are equivalent in different
>species, as are the kinds of synaptic interactions involving cortical
>neurons. This similarity in the organization of the cerebral cortex
>extends even to the specific details of cortical circuitry. (White
>1989, p. 179)."

I'd like to know what the author considers the "details of cortical
circuitry". As far as I know, the wiring diagram of any particular
cortical column is not known to any degree of detail. If, for example, one
column contained rings of six neurons, each neuron of which inhibited its
neighbors, and another contained similar rings of five neurons, we would
not yet be able to tell the difference. However, these rings are
functionally immensely different: one is a flip-flop and the other is an
oscillator. I may be wrong about this; perhaps someone who has advanced
professional Swedish knowledge of neurophysiology can comment.
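The flip-flop/oscillator claim about those two rings can be checked with a toy
synchronous simulation (a deliberately crude model I've made up for
illustration: each "neuron" is a binary unit that fires only when the neighbor
inhibiting it is silent):

```python
def step(state):
    # Synchronous update of a ring of inhibitory neurons: each unit
    # fires (1) next tick only if its inhibiting neighbor is silent (0).
    n = len(state)
    return tuple(1 - state[(i - 1) % n] for i in range(n))

def cycle_length(n):
    # Iterate from a near-alternating start until a state repeats,
    # then return the length of the cycle the ring fell into.
    state = tuple(i % 2 for i in range(n))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]

# An even ring can satisfy every inhibition at once, so it locks into a
# stable state (cycle length 1: a flip-flop).  An odd ring cannot -- the
# alternating pattern doesn't close -- so it oscillates forever.
print(cycle_length(6))  # 1
print(cycle_length(5))  # > 1
```

The parity argument is the whole point: identical-looking columns differing
only in ring size would behave in qualitatively different ways.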

>The third consideration is an evolutionary argument. The growth of
>neocortex that allowed Homo Sapiens to intellectually outclass other
>animals took place under a relatively brief period of time. This means
>that evolutionary learning cannot have embedded very much information
>in these additional cortical structures that give us our intellectual
>edge. This extra cortical development must rather be the result of
>changes to a limited number of genes, regulating a limited number of
>cortical parameters.

It doesn't seem to me that this speaks either for or against model A or B.
Nor does it indicate that simulating the brain is simple. We have to
simulate all the parts that are present in chimps, too! This includes
manual dexterity and vision that are no worse than human, the ability to
imitate others, and near-human levels of Machiavellian social interaction.

Or perhaps you are saying that once we get human-level AI, getting to
superhuman AI will be *relatively* easy? I'd agree with that.

You say:

>... a considerable part of the information that goes into
>the modularization [of adult brains] results from self-organization and
>perceptual input
>rather than from an immensely complicated genetic look-up table.

Model B would suggest that this sentence be slightly edited to:

A considerable part of the information that goes into the modularization
[of adult brains] results from self-organization *using* perceptual input
*according to* an immensely complicated genetic look-up table.

A note on sources: I didn't invent model B. I don't know who did, if
anyone. As far as I know, it is a perfectly respectable theory.

Here's an argument in favor of complexity, using a source of data you may
not have considered. The complexity of the "genetic look-up table" can be
estimated. A few years ago, Craig Venter and company extracted the
pattern of gene expression in various tissues using EST (expressed sequence
tag) technology. About 25% of all genes are expressed *only* in the brain.
Taken at face value, this would suggest that 1/4 of the genome is
responsible only for structuring the human mind. The number of genes in
the human genome is uncertain, but let's take the low end of the range and
say 50,000. So we will have to understand 12,500 genes, and the behavior
of their associated proteins, to figure out how brain development works.
That's a lot of information. Right now, unraveling the behavior of a
single gene takes about a scientist-century. Obviously, this gets shorter
as we get better tools, and to some extent we can ignore the molecular
details, but I think it's more than we can do in 15 years. You put your
trust in being able to

> know enough about the basic
>principles of how the brain works to be able to implement these
>computational paradigms on a computer, without necessarily modelling
>the brain in any biologically realistic way.

I would argue that there are no 'basic paradigms', just an annoyingly vast
pile of details.
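For what it's worth, the arithmetic behind my estimate can be laid out
explicitly. Every number below is an order-of-magnitude guess taken from the
discussion above, not a measurement:

```python
# Back-of-envelope estimate of the effort to understand brain development.
total_genes = 50_000     # low end of current estimates of human gene count
brain_fraction = 0.25    # share of genes expressed only in the brain (EST data)
years_per_gene = 100     # roughly one scientist-century per gene at present

brain_genes = int(total_genes * brain_fraction)
effort = brain_genes * years_per_gene   # total scientist-years required

print(brain_genes)  # 12500
print(effort)       # 1250000
```

Even if better tools cut that by a couple of orders of magnitude, it is hard to
fit into 15 years.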

Received on Thu Jan 8 15:01:36 1998
