Threading help?

Peter Hansen peter at engcorp.com
Wed Mar 6 22:54:09 EST 2002


Cliff Wells wrote:
> Peter Hansen wrote:
> > Cliff Wells wrote:
> > > Hm.  Okay, I had to reconsider this.  Clearly if the processing is slower
> > > than .1s and data is being added to it every .1s, the Queue is going to
> > > endlessly grow as more data is added to it.  If this is the case, it might 
> > > make sense to have more than one consumer thread (B) doing the processing.
> >
> > I might be missing something, but I don't see how adding another thread
> > (with a slight extra overhead associated with it) would actually increase
> > performance for what I infer is CPU-bound processing in thread B.
> 
> Yes, I thought of this, but I believe that this would give the net effect
> of increasing thread priority for the B threads (since they will get more
> CPU time as a whole vs threads A and C).

Ah, interesting.  I hadn't thought before of how the Python interpreter's
technique of executing a set number of bytecode instructions before switching
thread context would let you do something tricky like that.  Neat.
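Just to be sure I follow, the net effect you're describing would look
roughly like this, I think -- a sketch only (the worker count, the
process() stand-in and the queue name are made up for illustration):

import Queue
import threading

work = Queue.Queue()

def process(data):
    # stand-in for the real, CPU-bound processing step
    pass

def consumer():
    # each "B" thread blocks until A puts something on the queue,
    # then chews on it
    while 1:
        data = work.get()
        process(data)

# Extra B threads don't make the processing itself any faster (the
# interpreter still executes only one thread's bytecode at a time),
# but since it switches threads every N bytecode instructions (see
# sys.setcheckinterval), the B stage as a whole ends up with a larger
# share of those switches relative to threads A and C.
for i in range(3):
    t = threading.Thread(target=consumer)
    t.setDaemon(True)
    t.start()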

In any case, when you said "have thread A retrieve the data every .1s"
I figured this was a soft realtime situation, and stealing CPU from thread
A would not necessarily be what you need.  If A and B between them are
using up all the available time, you need to optimize the code, or do what
you suggested below and drop data (or increase the sampling period):

> But maybe a better approach would be this: if processing time is greater
> than .1s and we don't care about out-of-date data (big assumption, but not
> unreasonable), then use the Queue between A and B and simply have B empty
> the Queue on every iteration, processing only the newest data.  This would
> keep the Queue to a reasonable size at the cost of dropping data.

I suppose the acceptability of that depends on your requirements...
dropping data is never a very general approach. :-)
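If you do go that route, the draining step in B could be as simple as
something like this (again just a sketch, using the standard Queue module;
get_latest is a name I made up):

import Queue

def get_latest(q):
    # wait for at least one item, then drain whatever else has piled
    # up, keeping only the newest sample and discarding the rest
    data = q.get()
    try:
        while 1:
            data = q.get_nowait()
    except Queue.Empty:
        pass
    return data

Thread B would call get_latest(queue) once per iteration instead of a
bare get(), so the queue never grows beyond whatever accumulated during
the last processing pass.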

-Peter


