AI and cognitive psychology rant

Alex Martelli aleax at aleax.it
Tue Oct 14 12:58:45 EDT 2003


Isaac To wrote:

>>>>>> "Alex" == Alex Martelli <aleax at aleax.it> writes:
> 
>     Alex> Chess playing machines such as Deep Blue, bridge playing
>     Alex> programs such as GIB and Jack ..., dictation-taking programs
>     Alex> such as those made by IBM and Dragon Systems in the '80s (I
>     Alex> don't know if the technology has changed drastically since I
>     Alex> left the field then, though I doubt it), are based on
>     Alex> brute-force techniques, and their excellent performance comes
>     Alex> strictly from processing power.
> 
> Nearly no program would rely only on non-brute-force techniques.  On the

Yes, the temptation to be clever and cut corners IS always there...;-).

> other hand, all the machines that you have named use some non-brute-force
> techniques to improve performance.  How you can say that they are using
> "only" brute-force techniques is something I don't quite understand.  But

In the case of Deep Blue, just hearsay.  In the case of the IBM speech
recognition (dictation-taking) system, I was in the team and knew the
code quite well; that Dragon was doing essentially the same (on those
of their systems that _worked_, that is:-) was the opinion of people
who had worked in both teams, had friends on the competitors' teams,
etc (a lot of unanimous opinions overall).  GIB's approach has been
described in detail by Ginsberg, its author; Jack's hasn't, but the
behavior of the two programs, seen as players, is so similar that it
seems unlikely to me that their implementation techniques differ
drastically.

Maybe we're having some terminology issue...?  For example, I consider
statistical techniques "brute force"; it's not meant as pejorative --
they're techniques that WORK, as long as you can feed enough good data
to the system for the statistics to bite.  A non-brute-force model of
natural language might try to "understand" some part of the real world
that an utterance is about -- build a semantic model, that is -- and
use that model to generate hypotheses for further prediction or
processing; a purely statistical model just sees sequences of
symbols (words) in (e.g.) a Hidden Markov Model from which it takes
all its predictions -- no understanding, no semantic modeling.  A
non-brute-force bridge playing program might have concepts
(abstractions) such as "a finesse", "a squeeze", "drawing trumps",
"cross-ruffing", etc, and match those general patterns to the
specific distribution of cards to guide play; GIB just has a
deterministic double-dummy engine and guides play by Monte Carlo
samples of the possible distributions of the two unseen hands -- no
concepts, no abstractions.
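
To make that contrast concrete, here's a minimal Python sketch of the
Monte Carlo idea -- emphatically not GIB's code, just a toy one-suit,
one-trick game with made-up names, so the sketch stays self-contained:

import random

# A GIB-style decision, stripped to the bone: deal the unseen cards at
# random many times, score each legal play with an exact ("double-dummy")
# evaluation on every deal, and pick the play with the best average.
# The game here is a toy -- one suit, one trick, highest card wins -- so
# the sketch stays self-contained; GIB's real dealing constraints and its
# double-dummy engine are of course far richer.

DECK = list(range(2, 15))             # ranks 2..14 (ace high), one suit

def sample_unseen(my_hand, n_hidden):
    """Randomly assign cards we cannot see to the hidden opponent."""
    unseen = [c for c in DECK if c not in my_hand]
    return random.sample(unseen, n_hidden)

def tricks_won(my_play, opponent_hand):
    """Exact evaluation with the opponent's cards exposed: we take the
    trick only if none of their cards beats our play."""
    return 0 if any(c > my_play for c in opponent_hand) else 1

def choose_card(my_hand, n_samples=200):
    """Pick the card with the best average result over sampled deals."""
    scores = dict.fromkeys(my_hand, 0)
    for _ in range(n_samples):
        hidden = sample_unseen(my_hand, 3)
        for card in my_hand:
            scores[card] += tricks_won(card, hidden)
    return max(my_hand, key=scores.get)

print(choose_card([5, 9, 13]))        # usually 13: best over the samples

There's not a single bridge concept anywhere in there; whatever "skill"
emerges comes entirely from sampling plus exhaustive evaluation, and
that's the sense in which I'm calling the approach brute force.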

> even then, I can't see why this has anything to do with whether the
> machines are intelligent or not.  We cannot judge whether a machine is
> intelligent or not by just looking at the method used to solve it.  A
> computer is best at

That's not how I read the AAAI's page's attempt to describe AI.

> number crunching, and it is simply natural for any program to put a lot
> more weight than most human beings on number crunching.  You can't say
> a machine is unintelligent just because much of its power comes from
> there.  Of course, you might say that the problem does not require a
> lot of intelligence.

Again, that's not the AAAI's claim, as I read their page.  If a problem
can be solved by brute force, it may still be interesting to "model
the thought processes" of humans solving it and implement _those_ in
a computer program -- _that_ program would be doing AI, even for a
problem not intrinsically "requiring" it.  So AI is not just about
what problem is being solved but (according to the AAAI as I read
them) also about the approach taken to solve it.


> Whether a system is intelligent must be determined by the result.  When
> you feed the Deep Blue computer a chess position in which any average
> chess player would make a move that guarantees checkmate, but Deep Blue
> gives you a move that leads to stalemate, you know that it is not very
> intelligent (it did happen).

That may be your definition.  It seems to me that the definition given
by a body comprising a vast number of experts in the field, such as
the AAAI, might be considered to carry more authority than yours.


Alex




