"Richard Schroeppel" <rcs@cs.arizona.edu> writes:
> In his article about "How Long Till Superintelligence", Nick Bostrom sez ...
>
> > scale well. The Hebbian learning rule, on the other hand, is perfectly
> > scaleable (it scales linearly, since each weight update only involves
> > looking at the activity of two nodes, independently of the size of the
> > network). It is known to be a major mode of learning in the brain. It
>
> What's really known about the role of the Hebbian rule?
> Are there gerbil experiments? Cat scans of people? Etc?
> Any info relevant to cryonics & memory survival?
Hebb rules! :-) At least that was my impression at the hippocampus
modelling workshop I attended, and it seems to be the consensus among
the neuroscientists I have met.
The strongest evidence that it is used in the brain is LTP, long-term
potentiation: synapses activated by stimuli strong enough to make the
postsynaptic cell fire increase in strength (there is also
associativity; synapses that signal at the same time are strengthened
together). As far as I know this has been demonstrated in practically
all species studied, although it is easier to induce in some parts of
the brain than in others. Some have complained that most experiments
are done in vitro and hence may not realistically predict what
happens in the living brain, but there is good data showing that LTP
can at least be induced in vivo (detecting it occurring naturally is
much harder, since you would have to be *very* lucky to insert
electrodes into two cells that happen to have a co-firing episode -
the activity is rather sparse).
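To make the scaling point from the quoted passage concrete, here is a
minimal sketch of the plain Hebbian update (Python/numpy; the names
and constants are my own illustration, not any particular simulator's
API):

    import numpy as np

    eta = 0.01                   # learning rate (arbitrary)
    pre = np.random.rand(100)    # presynaptic activities
    post = np.random.rand(80)    # postsynaptic activities
    W = np.zeros((80, 100))      # weights, one row per postsynaptic cell

    # Each weight update reads only its own two nodes, so the cost is
    # one multiply-add per synapse - linear in the number of weights
    # and independent of the total network size.
    W += eta * np.outer(post, pre)

The positive co-activity term is the LTP-like part: a weight grows
exactly when its pre- and postsynaptic cells are active together.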
We do not yet know how stable LTP is, or whether it is the real
substrate of memory; it is the favored candidate for the early phases
of memory, but later phases seem to depend on the neurons changing
shape, adding spines and growing new synapses. I'm therefore very
interested in whether the Prometheus Project manages to freeze
hippocampal slices and thaw them to check if LTP persists - that
would be a great step forward.
Hebbian change does seem to be used in the brain: Michael Merzenich
has shown that something Hebbian appears to be going on when he
re-trains parts of the cortex. There is no strong PET data showing
Hebbian properties yet, but there is plenty of evidence from changes
in cortical maps etc.
On a more general level, the Hebbian rule works well for learning. I
speak from personal experience, since I have spent half a year
exploring a learning rule derived from Hebbian assumptions and
Bayesian statistics. It behaves oddly, but it turns out that every
bug I think I discover is really a feature :-) Despite being 100%
theoretical and intended for ANNs, it makes some predictions about
how synaptic weights should behave that have been borne out by
observation (long tonic stimulation produces weaker LTP than short
stimulation; blind luck or a good theory? We'll see...).
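I won't post the actual rule before it is written up, but to give the
flavor of how Hebbian assumptions and Bayesian statistics can
combine, here is a toy sketch of *one* such rule (invented for this
post, not the rule I am working on): estimate unit and pairwise
firing probabilities with running averages, and set each weight from
how much the pair co-fires beyond chance:

    import numpy as np

    eps = 1e-4  # probability floor, keeps the logarithms finite

    def hebbian_bayes_step(pre, post, p_pre, p_post, p_co, tau=100.0):
        """One learning step; pre and post are activities in [0, 1].
        The running probability estimates are updated in place."""
        k = 1.0 / tau
        p_pre += k * (pre - p_pre)                # estimate of P(pre)
        p_post += k * (post - p_post)             # estimate of P(post)
        p_co += k * (np.outer(post, pre) - p_co)  # P(pre & post together)
        # Weight = log odds of co-firing versus independence: positive
        # (LTP-like) when a pair fires together more than chance would
        # predict, negative (LTD-like) when less.
        return np.log(p_co / np.outer(p_post, p_pre))

    # Example: 100 presynaptic and 80 postsynaptic units.
    p_pre, p_post = np.full(100, eps), np.full(80, eps)
    p_co = np.full((80, 100), eps)
    W = hebbian_bayes_step(np.random.rand(100), np.random.rand(80),
                           p_pre, p_post, p_co)

Note that in this toy version long tonic stimulation drives the
pairwise estimate toward the product of the individual ones, so the
weight decays back toward zero - the same qualitative direction as
the tonic-versus-short LTP observation above.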
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y