Python music sequencer timing problems

badmuthahubbard badmuthahubbard at gmail.com
Wed Dec 10 11:12:40 EST 2008


I've been trying to get the timing right for a music sequencer using
Tkinter.  First I just loaded the Csound API module and ran a Csound
engine in its own performance thread.  The score timing was good,
being controlled internally by Csound, but any time I moved the mouse
I got audio dropouts.
It was suggested that I run the audio engine as a separate process
with elevated/realtime priority and use sockets to tell it what to
play; that way, people could also set up servers for the audio on
different CPUs.  But the method I came up with for timing the
beats/notes (threading.Timer on a function that calls itself over and
over) has too much overhead: the whole piece played too slowly, and
moving the mouse still gave me noise.  I've been using subprocesses so
far, and I'm now wondering whether sockets would make a difference.
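
The timing loop boiled down to something like this (a simplified
version; the real names are different):

import threading

SECONDS_PER_BEAT = 0.125   # assumed tempo

def send_note_to_engine(note):
    # placeholder for whatever actually hands the event to Csound
    print "note:", note

def play_beat(beat, notes, last_beat):
    for note in notes.get(beat, ()):
        send_note_to_engine(note)
    if beat < last_beat:
        # a fresh Timer (and thread) is created for every beat; its
        # setup cost lands on top of SECONDS_PER_BEAT each time, so
        # the error accumulates and the piece drags
        threading.Timer(SECONDS_PER_BEAT, play_beat,
                        args=(beat + 1, notes, last_beat)).start()

play_beat(0, {0: ["i1 0 1 440"], 4: ["i1 0 1 660"]}, 8)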

The overall goal is this: when the user wants to audition a piece,
create an audio engine process with elevated/realtime priority.  This
engine also has all the synthesis and sound processing rules for the
various instruments, due to the way Csound is structured.  Set up a
scheduler (possibly in another process, or just another thread) and
fill it with all the notes from the score and their times.  Also, the
user should be able to see a time-cursor moving across the piece so
they can see where they are in the score.  As this last bit is GUI,
the scheduler should be able to send callbacks back to the GUI as well
as notes to the audio engine.  But neither the scheduler nor the audio
engine should wait for Tkinter's updating of the location of the time-
cursor.  Naturally, every note will have a higher priority in the
scheduler than any GUI update, though notes and cursor updates won't
always fall at exactly the same instant.
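
To make the scheduler part concrete, what I picture is roughly this
(just a sketch built on the stdlib sched module; send_note and
move_cursor stand in for whatever ends up talking to the audio engine
and to Tkinter):

import sched
import time

def send_note(note):
    # placeholder: would send a score event to the Csound engine
    print "note:", note

def move_cursor(beat):
    # placeholder: would ask the GUI to redraw the time-cursor
    print "cursor at beat", beat

scheduler = sched.scheduler(time.time, time.sleep)

def load_score(score, seconds_per_beat):
    # score is a list of (beat, note) pairs
    last_beat = 0
    for beat, note in score:
        # priority 0: a note wins over a cursor update due at the
        # same instant
        scheduler.enter(beat * seconds_per_beat, 0, send_note, (note,))
        last_beat = max(last_beat, beat)
    for beat in range(int(last_beat) + 1):
        scheduler.enter(beat * seconds_per_beat, 1, move_cursor, (beat,))

load_score([(0, "i1 0 1 440"), (1, "i1 0 1 550"), (2, "i1 0 2 660")], 0.5)
scheduler.run()   # blocks, so it would live in its own thread or process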

So, I have a few ideas about how to proceed, but I want to know if
I'll need to learn more general things first:
1.
Create both the scheduler and the audio engine as separate processes
and communicate with them through sockets.  When all the events are
entered in the scheduler, open a server socket in the main GUI process
and listen for callbacks to move the cursor (is it possible to do this
from Tkinter's mainloop, so the mouse can still be moved, albeit
sluggishly, while the cursor moves continuously?  There's a rough
sketch of what I mean after option 3).  The audio engine runs at as
high a priority as possible, and the scheduler runs somewhere between
that and the priority of the main GUI, which could perhaps even be
temporarily lowered below its default for good measure.

or

2.
Create the audio engine as an elevated-priority process, and the
scheduler as a separate thread in the main process.  The scheduler
sends notes to the audio engine and makes callbacks within its own
process to move the GUI cursor (the polling sketch after option 3
would apply here too, with a Queue.Queue in place of the socket).
Optionally, every tiny cursor update could even be its own short-lived
thread.

3.
Closer to my original idea, but I'm hoping to avoid this.  All note
scheduling and tempo control are done by Csound as the audio engine, and a
Csound channel is set aside for callbacks to update the cursor
position.  Maybe this would be smoothest, as timing is built into
Csound already, but the Csound score will be full of thousands of
pseudo-notes that only exist for those callbacks.  Down the road I'd
like to have notes sound whenever they are added or moved on the
score, not just when playing the piece, as well as the option of
adjusting the level, pan, etc. of running instruments.
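
Here is a rough sketch of how I imagine the GUI side of option 1
(untested; the port number and the "beat N" message format are made
up): a non-blocking server socket that Tkinter polls from an after()
callback, so the mainloop keeps handling mouse events between cursor
updates.

import socket
import Tkinter as tk

HOST, PORT = "127.0.0.1", 9999        # made-up port

root = tk.Tk()
canvas = tk.Canvas(root, width=600, height=100, bg="white")
canvas.pack()
cursor = canvas.create_line(0, 0, 0, 100, fill="red")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)
server.setblocking(False)

conn = None

def poll_scheduler():
    global conn
    try:
        if conn is None:
            conn, addr = server.accept()   # fails harmlessly until the
            conn.setblocking(False)        # scheduler process connects
        data = conn.recv(4096)             # expects lines like "beat 17\n"
        if data:
            beat = float(data.splitlines()[-1].split()[1])
            canvas.coords(cursor, beat * 10, 0, beat * 10, 100)
    except socket.error:
        pass                               # nothing to read yet
    root.after(20, poll_scheduler)         # poll again in ~20 ms

root.after(20, poll_scheduler)
root.mainloop()

The scheduler process would just connect() to that port and write one
line per cursor step.  Option 2's GUI side would look the same, except
that the scheduler thread would put cursor positions on a Queue.Queue
and poll_scheduler would call get_nowait() instead of reading a
socket.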

It seems method 2 runs the risk of slowing down the timing of the
notes when the mouse moves around, while method 1 would require
setting up an event loop to listen for GUI updates from the scheduler.
I was trying method 1 with subprocesses, but reading from the
scheduler process's stdout PIPE for GUI updates wasn't working.  I was
referred to Twisted and the code module for this, and haven't yet
worked out how to use them appropriately.
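
In case it helps to be concrete, the PIPE arrangement I was aiming for
is roughly this (scheduler.py is a made-up name; the child would print
one "beat N" line per cursor step and flush stdout after each one,
with a helper thread handing the lines to the GUI so Tkinter never
blocks on the pipe):

import subprocess
import threading
import Queue

updates = Queue.Queue()

proc = subprocess.Popen(["python", "scheduler.py"],   # made-up child script
                        stdout=subprocess.PIPE)

def drain_pipe():
    # a blocking readline() is fine here because it runs in its own
    # thread, not in Tkinter's mainloop
    for line in iter(proc.stdout.readline, ""):
        updates.put(line.strip())

reader = threading.Thread(target=drain_pipe)
reader.setDaemon(True)
reader.start()

The Tkinter side would then poll updates.get_nowait() from an after()
callback, exactly like the socket-polling sketch above.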

I don't mind a complex solution, if it is reliable (I'm aiming at
cross-platform, at least WinXP-OSX-Linux), but everything I try seems
to add unnecessary complexity without actually solving anything.  I've
been reading up on socket programming, and catching bits here and
there about non-blocking I/O.  They seem like good topics to know
about if I want to do audio programming, but I also need a practical
solution for now.

Any advice?

Thanks a lot.
-Chuckk


