Re: poly: Software for superintelligence

From: Peter C. McCluskey <pcm@rahul.net>
Date: Sun Jan 18 1998 - 12:34:14 PST

 bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>scale well. The Hebbian learning rule, on the other hand, is perfectly
>scaleable (it scales linearly, since each weight update only involves
>looking at the activity of two nodes, independently of the size of the
>network). It is known to be a major mode of learning in the brain. It
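
 (For concreteness, the per-weight rule being described is roughly the
following; the Python rendering and the rate parameter eta are my own
illustration, not Nick's:

    # Hebbian rule for one connection: the update reads only the two
    # nodes the weight joins, so each weight update costs O(1),
    # independently of network size.
    def hebbian_update(w, pre, post, eta=0.1):
        return w + eta * pre * post

The constant per-update cost is not in dispute; what follows is about
how the total cost grows with how much you want remembered.)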

 I doubt it scales linearly. If you model it with a single neuron and
add one datum at a time, without caring about how new data affect what
was learned earlier, then it scales linearly but remembers poorly
(probably forgetting more than a human would). You'd probably need an
O(N^2) algorithm to handle the interference between data adequately.
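
 A toy version of the interference I mean, assuming Hopfield-style
outer-product storage (the dimensions and pattern count are my own
example):

    import numpy as np

    # Store n random +/-1 patterns in one d-by-d weight matrix by naive
    # outer-product Hebbian learning, then test one-step recall.
    d, n = 100, 60
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1.0, 1.0], size=(n, d))

    W = np.zeros((d, d))
    for p in patterns:                   # O(n) storage, nothing done
        W += np.outer(p, p) / d          # about crosstalk between patterns

    recalled = np.sign(W @ patterns[0])  # try to recall the first pattern
    print("recall overlap:", (recalled == patterns[0]).mean())

 Recall like this falls apart once you store much more than about
0.14*d patterns; checking each new datum against everything already
stored to correct for that is O(N) per datum, hence the O(N^2).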
 If, on the other hand, you make a network of neurons, then the
learning scales linearly if you add each new datum to the next
available neuron, but then retrieval times also go up linearly. To get
something whose retrieval times are as good as a human's, I'd guess you
will need something that scales as O(N*log(N)) at best.
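
 Concretely, the contrast I have in mind (a Python sketch; the integer
keys are stand-ins for whatever is being stored):

    import bisect

    flat_store = []                      # one datum per "neuron"

    def learn_flat(x):
        flat_store.append(x)             # O(1) per datum

    def recall_flat(x):
        return x in flat_store           # O(N) linear scan

    sorted_store = []                    # keep the data indexed instead

    def learn_sorted(x):
        # O(log N) search; a balanced tree would make the insertion
        # itself O(log N) too, for O(N*log(N)) over N data.
        bisect.insort(sorted_store, x)

    def recall_sorted(x):
        i = bisect.bisect_left(sorted_store, x)   # O(log N) lookup
        return i < len(sorted_store) and sorted_store[i] == x

 Learning N items the second way is O(N*log(N)) total, which is where
my guess at the bound comes from.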

-- 
------------------------------------------------------------------------
Peter McCluskey          | caffeine   O   CH3
pcm@rahul.net            |            ||  |
http://www.rahul.net/pcm |      H3C   C   N
                         |         \ / \ / \
                         |          N   C   C
                                    |   ||  ||
                                    C   C---N
                                  // \ /
                                  O   N
                                      |
                                      CH3