Peter C. McCluskey wrote:
> And if you hold the number of hidden nodes per backprop neuron fixed,
> the time it takes per weight update cycle also scales linearly, and
> the results are probably about as good as with the Hebb rule. It's
> only when you measure the ability of backprop or the Hebb rule to do
> something usefull that they scale up poorly. Which makes me confused
> about what, if anything, you think your analysis is relevant to.
Ok, now I see what you mean. Yes, it is not that the time it takes
for one cycle of weight updating scales differently for Backprop vs.
Hebb; the important difference is that Backprop is supervised
and Hebb's rule isn't. So Backprop requires an evaluation function of
behavioural output at the neuronal level, one that gives signed error
values for the activation of all our output (motor?) neurons.
How do we write down such a function? (And even if the function were
available, it would probably be infeasible to train a network the
size of the human brain with it; the network would have to be modularized.)
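To make the contrast concrete, here is a minimal sketch (my own toy
example, not anything from the discussion above) of what Backprop's
supervision requirement looks like: every weight update presupposes a
teacher that supplies a signed error (target minus output) for each
output neuron. The network sizes, learning rate, and the
`backprop_step` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 2 inputs -> 3 hidden -> 2 outputs.
W1 = rng.normal(size=(3, 2)) * 0.5
W2 = rng.normal(size=(2, 3)) * 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, lr=0.5):
    """One supervised update; note the signed error per output neuron."""
    global W1, W2
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    err = target - y                        # the teacher signal Backprop needs
    delta_out = err * y * (1.0 - y)         # error scaled by sigmoid slope
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    W2 += lr * np.outer(delta_out, h)
    W1 += lr * np.outer(delta_hid, x)
    return err

x = np.array([1.0, 0.0])
target = np.array([1.0, 0.0])
first = np.abs(backprop_step(x, target)).sum()
for _ in range(200):
    backprop_step(x, target)
last = np.abs(backprop_step(x, target)).sum()
print(last < first)  # error shrinks, but only because the teacher exists
```

The point is not the arithmetic but the dependence: without a function
that produces `target` for every output neuron, no update can be made.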
The Hebb rule doesn't require us to define an error function. The
difficulty is that it is not clear that it suffices to make the
network behave like an intelligent human adult. At the very least we
would have to add reward learning (where the error function is defined
only at the global level, as when we give a child candy for doing
something good, rather than at the neuronal level as in Backprop).
Perhaps these two learning modes, together with some clever
hard-wiring, will suffice.
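The two learning modes can be sketched side by side (again my own
illustrative toy, with made-up vectors and learning rate): plain Hebb
strengthens weights whenever pre- and postsynaptic activity coincide,
with no error signal at all, while reward learning gates the same
local rule with a single global scalar reward, the "candy" signal,
rather than per-neuron signed errors.

```python
import numpy as np

# Plain Hebb: strengthen a weight when pre- and postsynaptic
# activity coincide. No error signal of any kind is needed.
def hebb_step(w, pre, post, lr=0.1):
    return w + lr * np.outer(post, pre)

# Reward-modulated Hebb: the same local rule, gated by one
# GLOBAL scalar reward, not by per-neuron error values.
def reward_hebb_step(w, pre, post, reward, lr=0.1):
    return w + lr * reward * np.outer(post, pre)

w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])
post = np.array([1.0, 1.0])

w = hebb_step(w, pre, post)                      # unsupervised coincidence
w = reward_hebb_step(w, pre, post, reward=+1.0)  # "candy": reinforce
w = reward_hebb_step(w, pre, post, reward=-1.0)  # punishment: cancel it
print(np.isclose(w[0, 0], 0.1))  # left with just the plain-Hebb increment
```

Note how little information the reward signal carries compared with
Backprop's vector of signed errors; that is exactly the scaling worry.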
I have a scheme for how the brain might store complex representations
in long-term memory, after a one-shot presentation, using only
Hebbian learning. I'm not sure this is the right place to discuss it,
but perhaps someone could recommend an appropriate forum for such
issues?
________________________________________________
Nick Bostrom
http://www.hedweb.com/nickb
Received on Fri Jan 23 00:17:42 1998
This archive was generated by hypermail 2.1.8 : Tue Mar 07 2006 - 14:45:29 PST