parsley parsing question, how to make a variable grammar

Eric S. Johansson esj at harvee.org
Mon Jun 16 12:02:03 EDT 2014


On 6/14/2014 8:10 PM, Michael Torrie wrote:
> On 06/13/2014 03:05 PM, Eric S. Johansson wrote:
>> I appreciate any insight before I go too far off track.
>> --- eric
> Perhaps this is off-topic, and doesn't answer your question, but is
> Parsley a natural language parsing tool?  If not, and if it is natural
> language that you're trying to parse, maybe you should see if the
> natural language toolkit would be more appropriate to your needs.

Natural language is a rathole that many people go down when trying to 
build a speech user interface. In reality, all you need is something 
vaguely resembling normal language use: something close enough to 
natural language that it's easy to remember, but succinct enough that 
you don't burn out your voice. An example of this is my task log. It 
looks like this:

16-Jun-2014 11:46 esj: start
I did something today. No really. I started work on time and I finished 
on time
16-Jun-2014 15:46 esj: end
day: 4 hours
...
week: ...

The speech grammar is "job stamp (start | end)". "Job stamp start" simply 
puts in the timestamp as seen above. "Job stamp end" adds the ending 
time stamp plus the day hours calculation. When I produce a report, the 
day and week numbers are recalculated in case I changed something 
manually. The calculation is then displayed so I can build an invoice.
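To give a concrete (if hypothetical) sense of how that command could be 
parsed once the recognizer hands back the text, here is a minimal 
Parsley sketch; the rule names, and the assumption that the utterance 
arrives as a plain string, are mine:

import parsley

# Minimal sketch: parse "job stamp start" / "job stamp end" and return
# the action word. Rule names are invented for illustration.
stamp_grammar = parsley.makeGrammar("""
action = 'start' | 'end'
command = 'job' ws 'stamp' ws <action>:a -> a
""", {})

print(stamp_grammar("job stamp start").command())  # -> 'start'
print(stamp_grammar("job stamp end").command())    # -> 'end'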

This model is speech friendly on two levels. First, the grammar is 
simple and automates as much as it can. Second, if I didn't have the 
grammar and macro capability, I could still speak the magic keywords 
without too much stress. There's another level of speech friendliness: 
I also have "time stamp" (11:57), "date stamp" (Jun 16, 2014), and 
"log stamp" (16-Jun-2014 esj:). Since they are all related and similar 
in construction, they're easier to remember.
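For what it's worth, the stamps themselves are trivial to generate with 
the standard library. A rough sketch, where the function names and the 
default user are my own and the formats are inferred from the examples 
above:

from datetime import datetime

# Three related stamp formats, all derived from one datetime.
def time_stamp(now=None):
    now = now or datetime.now()
    return now.strftime("%H:%M")                         # e.g. 11:57

def date_stamp(now=None):
    now = now or datetime.now()
    return now.strftime("%b %d, %Y")                     # e.g. Jun 16, 2014

def log_stamp(user="esj", now=None):
    now = now or datetime.now()
    return now.strftime("%d-%b-%Y") + " " + user + ":"   # e.g. 16-Jun-2014 esj: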

There's another aspect to natural language that is a bit of a rathole. In 
my opinion, a good speech interface uses a combination of visual and 
speech elements. For example, you should be able to see something on the 
screen and speak to it. This implies that the speech application can 
read the contents of the visual application so it can make the right 
decisions regarding grammar; a rough sketch of what such a variable 
grammar might look like follows.
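Since the thread is about making a variable grammar, here is a 
speculative sketch of that idea with Parsley: regenerate the grammar 
source from whatever labels happen to be visible. The label list and the 
"click" command are invented for illustration; getting the real labels 
out of the visual application is the hard part.

import parsley

def make_screen_grammar(labels):
    # assumes labels are simple words with no quotes or grammar syntax
    alternatives = " | ".join("'%s'" % label for label in labels)
    source = "command = 'click' ws <(%s)>:label -> label\n" % alternatives
    return parsley.makeGrammar(source, {})

g = make_screen_grammar(["save", "cancel", "help"])
print(g("click save").command())  # -> 'save'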

This usually isn't possible because visual applications don't reveal 
appropriate content at the right level. The best we can do now is a bit 
of screen and menu scraping of highly decimated information that has 
lost all context.  grumble



