Markov.py

Carel Fellinger cfelling at iae.nl
Thu Jan 25 15:43:40 EST 2001


Alex Martelli <aleaxit at yahoo.com> wrote:
> "Carel Fellinger" <cfelling at iae.nl> wrote in message
> news:94n920$e66$1 at animus.fel.iae.nl...
>     [snip]
>> Impressive! (I take it it did single words only, and I guess it needed
>> training to get a good profile of the speaker, but still impressive)

> Real-time speech but not a continuous one (short-as-you-wish,
> but mandatory, pause between words -- we called it 'dictation'
> speech), and 15 minutes' training needed per speaker; it took a
> dedicated IBM mainframe with a few CPU's and 'vector features',
> or equivalent (3rd party vector-processing units for the same
> class of mainframe).  It _was_ 1985 before 'we' (actually
> some brilliant HW guys in IBM Research) turned out special-

Ah, that project (mid-'80s IBM Research); its claims *stunned* us at
the time!

> purpose boards that you could stuff inside an IBM PC/AT to
> make it perform the same task (maybe even early '86 before
> the latter box was reliable enough to announce & demo) --
> what we contributed to the miniaturization effort was a
> study of numeric sensitivity, to turn the computations into
> fixed-point ones at various minimal-sufficient precisions
> along the algorithms' paths.
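
As an aside, here's a toy Python sketch of the fixed-point trick
described above; the precision and the values are invented purely for
illustration and have nothing to do with the actual IBM figures:

FRAC_BITS = 12                 # invented precision, purely illustrative
SCALE = 1 << FRAC_BITS

def to_fixed(p):
    # quantize a probability in [0, 1] to an integer with
    # FRAC_BITS fractional bits
    return int(round(p * SCALE))

def fixed_mul(a, b):
    # multiply two fixed-point values and rescale the result
    return (a * b) >> FRAC_BITS

p, q = 0.31, 0.47
approx = fixed_mul(to_fixed(p), to_fixed(q)) / SCALE
print(p * q, approx)           # 0.1457 vs. 0.1455078125

The sensitivity study then amounts to finding, for each step along the
algorithm's path, the smallest FRAC_BITS that keeps the results stable.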


>> But if I remember correctly, the use of frequency analysis to
>> recognize spoken words was not what was criticised at all. We (the

> You don't (remember correctly) -- being the target of such
> criticisms sharpens one's memories:-).  I remember silencing

Well, I should have spoken only for our project then:), and within
Philips Research there were some clever men working on continuous
speech recognition using stochastic models too, so we were perfectly
aware of the feasibility of the approach.  But still we thought that
some decent grammar might be useful for weeding out ambiguities:)
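
Since the thread started from Markov.py anyway, here's a toy sketch of
the kind of stochastic model meant above -- plain unsmoothed word
bigrams over a made-up corpus, nothing like the real Philips or IBM
systems, but enough to show how such a model can weed an acoustic
ambiguity:

from collections import defaultdict

def train_bigrams(sentences):
    # count word-bigram frequencies from a few training sentences
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1
    return counts

def score(counts, sentence):
    # relative likelihood under the bigram model; unsmoothed, so any
    # unseen bigram drives the score to zero (a toy, not a product)
    words = sentence.lower().split()
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        total = sum(counts[prev].values())
        p *= counts[prev][cur] / total if total else 0.0
    return p

corpus = ["it is hard to recognize speech",
          "we recognize speech every day"]
model = train_bigrams(corpus)
# the classic acoustic ambiguity: the candidates sound much alike,
# but only one is plausible given the training data
print(score(model, "it is hard to recognize speech"))    # 1.0
print(score(model, "it is hard to wreck a nice beach"))  # 0.0

A real recognizer would smooth the counts and work on phonemes rather
than whole words, of course, but the principle is the same.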

> for occasional [and inadvertent] lack-of-pause between two
> adjacent words... which occasionally made HER go wrong, too!).

A striking similarity; it makes me wonder: did your father build a bot before?

>> members of the Rosetta team) felt a bit awkward about the claim that
>> stochastic models were better suited to attach meaning to (written)
>> natural language and even to weed out ambiguities.  Grammars, on the
>> other hand, are soo crisp and clear, hence easy to understand:) Probably had

> Right -- and therefore insufficient to model what human beings
> actually DO with language:-).

Yep, and we were painfully aware of that; we refrained from even
beginning to try to translate novels, let alone poetry.  And then there
was that dreadful `context' in which an utterance had to be
interpreted -- no way to model *that* in sufficient detail (or
actually: at all), so we left it out:)


-- 
greetings, carel


