auto-correct a speech-to-text output and relate it to the nearest word based on syllables

naveen at emagevisionpl.com
Thu Feb 1 03:50:28 EST 2018


Hi,

I have to make an application in which the user asks a question and Google's API is used to convert the speech to text. The problem is that, because of different accents, the recognizer misunderstands some words.
I want my application to guess the nearest word to the one the user actually spoke.
When the user speaks a word, the syllables in that word should be matched against the syllables of the words in my dictionary.
What I have done so far:
I used the CMU Pronouncing Dictionary. It contains ~122,300 words along with their syllables.
Along with this, I have created my own dictionary, which contains only the words I need.
I have implemented Python logic that makes reasonable guesses for the words the user speaks and matches them against my dictionary.
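Roughly, my matching works along these lines (a simplified sketch, not my exact code; the word list, helper names, and plain edit-distance scoring here are just placeholder assumptions):

# Sketch: map a recognized word to the closest word in a small custom
# dictionary by comparing CMU phoneme sequences with an edit distance.
# Assumes NLTK with the cmudict corpus downloaded:
#   pip install nltk
#   python -c "import nltk; nltk.download('cmudict')"
from nltk.corpus import cmudict

CMU = cmudict.dict()  # word -> list of pronunciations (phoneme lists)

# Illustrative custom dictionary -- stands in for my own word list.
MY_WORDS = ["light", "white", "bright", "night", "right"]

def phonemes(word):
    """Return the first CMU pronunciation of a word, or None if unknown."""
    prons = CMU.get(word.lower())
    return prons[0] if prons else None

def edit_distance(a, b):
    """Plain Levenshtein distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (pa != pb))  # substitution
    return dp[-1]

def nearest_word(recognized):
    """Pick the custom-dictionary word whose phonemes are closest."""
    target = phonemes(recognized)
    if target is None:
        return recognized  # unknown to the CMU dictionary; keep as-is
    scored = [(edit_distance(target, phonemes(w)), w)
              for w in MY_WORDS if phonemes(w) is not None]
    return min(scored)[1]

print(nearest_word("lite"))   # should come out as "light"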
But sometimes it makes the same guess for more than one word.
example:
The user speaks "Light". The system translates it as "Bright".
The user speaks "White". The system translates it as "Bright".
I don't want this to happen. The system should make a reasonable conversion.
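Continuing the sketch above, such collisions become easier to see if the lookup returns every dictionary word tied for the minimum distance instead of a single winner (again just an illustration built on the hypothetical helpers above):

def candidates(recognized):
    """Return all custom-dictionary words tied for the minimum phoneme
    distance, so ambiguous matches are visible instead of hidden."""
    target = phonemes(recognized)
    if target is None:
        return [recognized]
    scored = [(edit_distance(target, phonemes(w)), w)
              for w in MY_WORDS if phonemes(w) is not None]
    best = min(d for d, _ in scored)
    return [w for d, w in scored if d == best]

# If two different inputs produce the same candidate list, the scoring is
# too coarse to tell them apart and needs a finer-grained distance.
print(candidates("light"))
print(candidates("white"))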

Any help would be appreciated.

I am using Python for the logic.

Regards,
Naveen BM


