Re: poly: Software for superintelligence

From: Nick Bostrom <>
Date: Mon Jan 19 1998 - 02:15:15 PST

"Peter C. McCluskey" <> writes

> ("Nick Bostrom") writes:
> >scale well. The Hebbian learning rule, on the other hand, is perfectly
> >scaleable (it scales linearly, since each weight update only involves
> >looking at the activity of two nodes, independently of the size of the
> >network). It is known to be a major mode of learning in the brain. It
> I doubt it scales linearly. If you model it as a single neuron, and
> add one datum at a time without caring about how new data affects what
> it learned earlier, then it scales linearly but remembers poorly (probably
> forgets more than a human would). You'd probably need an O(N^2) algorithm
> to handle the interference between data adequately.

I am not sure what you mean. I didn't claim that Hebbian learning is
the only learning mode that occurs in the brain, or that it would by
itself suffice to replicate human performance. So if that is what you
are challenging, then I agree with you. But it follows trivially from
the definition of the original Hebb rule that, if we hold the number
of synapses per neuron fixed, the time it takes to go through one
weight-update cycle for the whole network grows linearly with the
number of neurons. For in order to determine the update of a given
synapse, the Hebb rule takes as input merely the previous weight of
that synapse and the activation levels of the neurons on both sides
of that synapse:

Wij' = Wij + constant*Ai*Aj
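The per-cycle cost claim above can be sketched in a few lines of
Python (my illustration, not code from the original post; the names
and the fixed fan-out of 10 synapses per neuron are assumptions).
Each synapse is visited exactly once, reading only its own weight and
the two activation levels, so one cycle costs O(number of synapses),
i.e. linear in the number of neurons at fixed fan-out:

```python
import numpy as np

def hebbian_update(weights, pre, post, A, c=0.01):
    """One Hebbian cycle: weights[s] += c * A[pre[s]] * A[post[s]].

    weights[s] is the synapse from neuron pre[s] to neuron post[s].
    Every synapse is touched once, using only local quantities, so the
    cost is proportional to the number of synapses.
    """
    return weights + c * A[pre] * A[post]

rng = np.random.default_rng(0)
n, fanout = 1000, 10                        # synapses per neuron held fixed
pre = np.repeat(np.arange(n), fanout)       # presynaptic neuron of each synapse
post = rng.integers(0, n, size=n * fanout)  # postsynaptic neuron of each synapse
weights = np.zeros(n * fanout)
A = rng.normal(size=n)                      # activation levels

weights = hebbian_update(weights, pre, post, A)
# n * fanout synapse updates per cycle: doubling n doubles the work.
assert np.allclose(weights, 0.01 * A[pre] * A[post])
```

Doubling n while keeping fanout fixed doubles the array lengths and
hence the work per cycle, which is the linear scaling claimed above.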

Another nice thing about Hebbian learning is that it is unsupervised:
there is no need for performance evaluation. This is in contrast to
the algorithms of the Backprop family, which presuppose a "teacher"
that generates error signals that are then propagated back through
the various layers of the net. It is much harder to imagine how that
could be implemented in the brain.
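The contrast can be made concrete with two single-synapse update
rules (my sketch, not from the original post; the delta rule stands
in here for the simplest error-driven member of the Backprop family):

```python
def hebb_step(w, x, y, c=0.01):
    # Unsupervised: uses only the two local activations, no target.
    return w + c * x * y

def delta_step(w, x, y, target, c=0.01):
    # Error-driven: needs a teacher-supplied target to form the error
    # (target - y) before the weight can be updated.
    return w + c * (target - y) * x

w, x, y, target = 0.2, 1.0, 0.5, 0.8
w_hebb = hebb_step(w, x, y)             # 0.2 + 0.01*1.0*0.5 = 0.205
w_delta = delta_step(w, x, y, target)   # 0.2 + 0.01*0.3*1.0 = 0.203
```

The extra `target` argument is exactly the performance evaluation
that Hebbian learning does without.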

Nick Bostrom
Received on Mon Jan 19 10:22:13 1998

This archive was generated by hypermail 2.1.8 : Tue Mar 07 2006 - 14:45:29 PST