AI and cognitive psychology rant (was Re: BIG successes of Lisp...)

Stephen Horne $$$$$$$$$$$$$$$$$ at $$$$$$$$$$$$$$$$$$$$.co.uk
Mon Oct 13 23:13:03 EDT 2003


On 13 Oct 2003 16:43:57 -0700, Paul Rubin
<http://phr.cx@NOSPAM.invalid> wrote:

>mike420 at ziplip.com writes:

>> b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp. 
>> Why? I'd tell you, but then I'd have to kill you :)
>
>The Yahoo Store software was written by some small company that sold
>the business to some other company that didn't want to develop in
>Lisp, I thought.  I'd be interested to know more.

In a context like that, my first guess would be "the other company
didn't have any Lisp programmers". The second would be that the
programmers they did have didn't like (their idea of) Lisp.

Assuming there is some truth in that, it would probably have been
rationalised in other terms, of course.

>> Another big failure that is often _attributed_ to Lisp is AI,
>> of course. But I don't think one should blame a language
>> for AI not happening. Marvin Minsky, for example, 
>> blames Robotics and Neural Networks for that. 
>
>Actually, there are many AI success stories, but the AI field doesn't
>get credit for them, because as soon as some method developed by AI
>researchers becomes useful or practical, it stops being AI.  Examples
>include neural networks, alpha-beta search, natural language
>processing to the extent that it's practical so far, optical character
>recognition, and so forth.

Absolutely true - and many more.

I have come to believe that cognitive psychologists (of whatever field
- there seems to have been a 'cognitive revolution' in most variants
of psychology) should have some experience of programming - or at
least of some field where their conception of what is possible in
terms of information processing would get some hard testing.

Brains are not computers, of course - and even more important, they
were evolved rather than designed. But then a lot of cognitive
psychologists don't seem to take any notice of the underlying
'hardware' at all. In the field I'm most interested in - autism - the
main cognitive 'theories' are so sloppy that it is very difficult to
pick out the 5% of meaningfulness from the 95% of crud.

Take 'theory of mind', for instance - the theory that autistic people
have no conception that other people have minds of their own. This is
nice and simple at 'level 1' - very young autistics seem not to keep
track of other people's states of mind when other children of the same
age do, just as if they don't realise that other people have minds of
their own. But the vast majority of autistics pass level 1 theory of
mind tests, at least after the age of about 5.

The cognitive psychologists' answer to this is to apply level 2 and
higher theory of mind tests. They acknowledge that these tests are
more complex, but they don't acknowledge that they are testing
anything different. But take a close look and it rapidly becomes
apparent that the difference between these tests and the level 1 tests
is simply the amount of working memory that is demanded.

Actually, you don't have to study the tests to realise that - just
read some of the autistic people's reactions to the tests. They
*understand* the test, but once the level is ramped up high enough
they can't keep track of everything they are expected to keep track
of. It's rather like claiming that if you can't simultaneously
remember 5000 distinct numbers you must lack a 'theory of number'!

But when autistic people have described things that suggest they have
'theory of mind' (e.g. Temple Grandin), the experts' response has
typically been to suggest that either the author or her editor is a
liar (e.g. Francesca Happé, 'the autobiographical writings of three
Asperger syndrome adults: problems of interpretation and implications
for theory', section in 'Autism and Asperger Syndrome' edited by Uta
Frith).

It's not even as if the 'theory of mind' idea has any particular
predictive power. The symptoms of autism vary dramatically from one
person to another, and the distinct symptoms vary mostly independently
of one another. Many symptoms of autism (e.g. sensory problems) have
nothing to do with awareness of other people.

The basic problem, of course, is that psychologists often lean more
towards the subjective than the objective, and towards intuition
rather than logic. It is perhaps slightly ironic that part of my own
theory of autism (integrating evolutionary, neurological and cognitive
issues) has as a key component an IMO credible answer to 'what is
intuition, how does it work, and why is (particularly social)
intuition disrupted in autism?'

But despite all this, it is interesting to note that cognitive
psychology and AI have very much the same roots (even down to mainly
the same researchers in the early days) and that if you search hard
enough for the 'good stuff' in psychology, it doesn't take long before
you start finding the same terms and ideas that used to be strongly
associated with AI.

Creating a machine that thinks like a person was never a realistic
goal for AI (and probably won't be for a very long time yet), but that
was always the dreaming and science fiction rather than the fact. Even
so, it is hard to think of an AI technique that hasn't been adopted by
real applications.

I remember the context where I first encountered Bayes' theorem. It was
in AI - expert systems, to be precise - along with my first encounter
with information theory. The value of Bayes' theorem is that it allows
an assessment of the probability of a hypothesis based only on the
kinds of information that can be reasonably easily assessed in
studies, approximated by a human expert, or even learned by the expert
system in some contexts as it runs. The probability of a hypothesis
given a single fact can be hard to judge directly, but a good
approximation of the probability of that fact given the hypothesis is
usually easy to obtain from a simple set of examples.
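
To make that concrete, here is a minimal Python sketch of the kind of
calculation I mean - the figures and the spam example are invented
purely for illustration, not taken from any real study:

def posterior(prior, p_fact_given_h, p_fact_given_not_h):
    """P(hypothesis | fact) via Bayes' theorem, computed from the
    quantities that are easy to estimate from examples: P(hypothesis),
    P(fact | hypothesis) and P(fact | not hypothesis)."""
    p_fact = (p_fact_given_h * prior +
              p_fact_given_not_h * (1.0 - prior))
    return p_fact_given_h * prior / p_fact

# Hypothesis: "message is spam"; fact: "message contains 'free'".
# All three inputs are made-up figures for the sake of example.
print(posterior(prior=0.2,                  # P(spam)
                p_fact_given_h=0.6,         # P('free' | spam)
                p_fact_given_not_h=0.05))   # P('free' | not spam)
# -> roughly 0.75

The point being that the three inputs are exactly the quantities that
are easy to count up from a set of examples, while the output is the
hard-to-judge probability of the hypothesis itself.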

Funny how a current popular application of this approach (spam
filtering) is not considered to be an expert system, or even to be AI
at all. But AI was never meant to be in your face. Software acts more
intelligently these days, but the methods used to achieve that are
mostly hidden.


-- 
Steve Horne

steve at ninereeds dot fsnet dot co dot uk



