Using threads for audio computing?
Roy Smith
roy at panix.com
Sun May 11 10:40:49 EDT 2014
In article <536f869c$0$2178$426a74cc at news.free.fr>,
lgabiot <lgabiot at hotmail.com> wrote:
> Hello,
>
> I'd like to be able to analyze incoming audio from a sound card using
> Python, and I'm trying to establish a correct architecture for this.
>
> Getting the audio is OK (using PyAudio), as are the calculations
> needed, so I won't be discussing those; what I'm after is the general
> idea of, at (roughly) the same time, getting audio and performing
> calculations on it, while not losing any incoming audio.
> I also make the assumption that my calculations on the audio will be
> done faster than the time it takes to get the audio itself, so that
> the application would be almost real time.
>
>
> So far my idea (which works according to the small tests I did)
> consists of using a Queue object as a buffer for the incoming audio
> and two threads, one to feed the queue and the other to consume it.
>
>
> The queue could store the audio as a collection of numpy arrays of x
> samples each. The first thread's job would be to put() new chunks of
> audio into the queue as they are received from the audio card, while
> the second would get() chunks from the queue and perform the
> necessary calculations on them.
>
> Am I in the right direction, or is there a better general idea?
>
> Thanks!
If you are going to use threads, the architecture you describe seems
perfectly reasonable. It's a classic producer-consumer pattern.
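As a rough sketch of that producer-consumer pattern, using only the standard library: the producer thread here generates fake chunks (plain lists of samples) so the example is runnable anywhere, but in the real application it would call something like stream.read() on a PyAudio stream, and the consumer's sum() is just a placeholder for the actual analysis.

```python
import queue
import threading

CHUNK_COUNT = 5
SENTINEL = None  # tells the consumer there is no more audio coming

audio_queue = queue.Queue()
results = []

def producer():
    """Capture thread: put() chunks into the queue as they 'arrive'."""
    for i in range(CHUNK_COUNT):
        chunk = [i] * 4  # in practice: stream.read(frames_per_buffer)
        audio_queue.put(chunk)
    audio_queue.put(SENTINEL)

def consumer():
    """Analysis thread: get() chunks and run the computation on each."""
    while True:
        chunk = audio_queue.get()
        if chunk is SENTINEL:
            break
        results.append(sum(chunk))  # placeholder for the real analysis

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()

print(results)
```

Queue handles all the locking internally, so neither thread needs explicit synchronization; the sentinel is one common way to shut the consumer down cleanly.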
But, I wonder if you even need anything this complicated. Using a queue
to buffer work between threads makes sense if the workload presented is
uneven: sometimes you get a burst of work all at once, without the
capacity to process it in real time, so you want to buffer it up.
I would think sampling audio would be a steady stream. Every x ms, you
get another chunk of samples, like clockwork. Is this not the case?
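If the stream really is that regular, and each chunk can be analyzed before the next one arrives, the whole thing could be a single synchronous loop with no threads or queue at all. A sketch, where read_chunk() is a hypothetical stand-in for a blocking read on the sound card (e.g. PyAudio's stream.read()):

```python
def read_chunk(i):
    """Hypothetical capture call; returns a fake chunk of samples."""
    return [i] * 4

def analyze(chunk):
    """Placeholder for the real per-chunk computation."""
    return sum(chunk)

results = []
for i in range(5):            # in practice: loop until stopped
    chunk = read_chunk(i)     # blocks until the next chunk is ready
    results.append(analyze(chunk))

print(results)
```

Since the read blocks until a chunk is available, the loop is naturally paced by the audio hardware; this only breaks down if a single analysis ever takes longer than one chunk period, which is exactly when the buffering queue earns its keep.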