Re: poly: Software for superintelligence

From: Nick Bostrom <>
Date: Thu Jan 08 1998 - 13:49:58 PST

 Hal Finney <> wrote:

> > One way for it to be unexpectedly difficult to achieve
> > human-level AI through the neural network approach would be if it
> > turned out that the human brain relies on a colossal amount of genetic
> > hardwiring, so that each cognitive function depends on a unique and
> > hopelessly complicated inborn architecture, generated over the aeons
> > in the evolutionary learning process of our species.
> I think this is a strawman. The problem could be extremely difficult
> without there being a "colossal" amount of genetic structure. It only
> takes a few bits of information before trial-and-error becomes hopelessly
> inadequate as a strategy for finding the right structure.

Good point. In general, I tend to agree with your remarks. Many of
them concerned differences of emphasis, which are partly explained by
the purpose for which the paper containing that section was written. I
wanted to "outline the case" for believing that superintelligence will
arrive within the first third of the next century; so I focused on the
positive side and de-emphasized the

Nick Bostrom
Received on Thu Jan 8 21:44:07 1998
