AI and cognitive psychology rant

Robert Brewer fumanchu at amor.org
Tue Oct 14 17:08:54 EDT 2003


> Maybe we're having some terminology issue...?  For example, I consider
> statistical techniques "brute force"; it's not meant as pejorative --
> they're techniques that WORK, as long as you can feed enough good data
> to the system for the statistics to bite.  A non-brute-force model of
> natural language might try to "understand" some part of the real world
> that an utterance is about -- build a semantic model, that is -- and
> use the model to draw some part of the hypotheses for further pre-
> diction or processing; a purely statistical model just sees sequences of
> symbols (words) in (e.g.) a Hidden Markov Model from which it takes
> all predictions -- no understanding, no semantic modeling.  A non-bf
> bridge playing program might have concepts (abstractions) such as "a
> finesse", "a squeeze", "drawing trumps", "cross-ruffing", etc, and
> match those general patterns to the specific distribution of cards to
> guide play; GIB just has a deterministic double-dummy engine 
> and guides play by Monte Carlo samples of possible distributions
> for the two unseen hands -- no concepts, no abstractions.
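
That Monte Carlo loop is simple enough to sketch in Python (this being
a Python list, after all). Everything below is a toy: sample_hidden()
and double_dummy() are made-up stubs standing in for a real dealer and
a real double-dummy solver, not anything out of GIB itself.

    import random

    # Made-up stand-ins for GIB's internals -- NOT real GIB code.
    # sample_hidden() would deal the two unseen hands consistent with
    # everything seen so far; double_dummy() would be the deterministic
    # perfect-information solver.
    def sample_hidden(seen):
        return random.random()

    def double_dummy(layout, play):
        # Pretend the solver likes "finesse" a bit better on average.
        return random.random() + (0.1 if play == "finesse" else 0.0)

    def monte_carlo_play(seen, plays, samples=1000):
        """Pick the play that scores best averaged over sampled
        layouts of the hidden hands -- no concepts, no abstractions,
        just counting."""
        totals = dict.fromkeys(plays, 0.0)
        for _ in range(samples):
            layout = sample_hidden(seen)   # one possible world
            for play in plays:
                totals[play] += double_dummy(layout, play)
        return max(totals, key=totals.get)

    print(monte_carlo_play(seen=[], plays=["finesse", "draw trumps"]))

The point of the shape: all the "intelligence" lives in the sampling
and the counting, not in any model of what a finesse *is*.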

I've always understood the difference from two separate points of view.
On one level, you must have a device, a machine, which is built by
intelligent humans. There's no getting around this, but it's easy for
even the "emergent intelligence" crowd to get side-tracked by people who
think that, just because the *substrate* has been designed, the entire
beastie is therefore nothing more than an expert system.

However, there are systems being built that do possess the
concepts/abstractions you mention. A fantastic example is NOMAD/Darwin,
a real attempt at hooking learning software to a physical substrate:
the feedback is physical, not virtual (cf.
http://www.nsi.edu/public/news/pr20000000.php). I cornered Jim Snook,
gosh, two years ago now, and sat riveted for an hour listening to him
outline everything I've ever wanted to put into such a system (but
haven't had the means; it's a purely academic hobby of mine). Their
approach seems spot-on to me.

Yes, any AI system must be realized in hardware at some point. But I
think the emergent-properties folks see something fundamentally
different from what the expert-systems crowd sees: that at some point,
AI builders must take their hands off the reins and let a well-designed
system design itself for a while.
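
In miniature, that hands-off loop might look like this. Everything in
it is an invented toy (the environment, the payoffs), not the NSI
design -- the designer writes only the substrate (a value estimate and
an update rule), and the preference for "right" emerges from feedback:

    import random

    ACTIONS = ["left", "right"]
    value = dict.fromkeys(ACTIONS, 0.0)

    def feedback(action):
        # Hypothetical environment: "right" happens to pay off more
        # often.
        return 1.0 if action == "right" and random.random() < 0.8 else 0.0

    for step in range(1000):
        # Explore a little; exploit what's been learned the rest of
        # the time.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(value, key=value.get)
        # The only designed part: nudge the estimate toward the
        # observed feedback.
        value[action] += 0.1 * (feedback(action) - value[action])

    print(value)   # "right" wins, and nobody hand-coded that preference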

Not sure what this has to do with Python. :)


Robert Brewer
MIS
Amor Ministries
fumanchu at amor.org
