Ending data exchange through multiprocessing pipe

Michal Chruszcz mchruszcz at gmail.com
Thu Apr 23 05:15:53 EDT 2009


On Apr 22, 10:30 pm, Scott David Daniels <Scott.Dani... at Acm.Org>
wrote:
> Michal Chruszcz wrote:
> > ... The first idea that came to my mind was using a queue. I've got
> > many producers (all of the workers) and one consumer. Seems quite
> > simple, but it isn't, at least for me. I presumed that each worker
> > would put() its results to the queue and finally close() it, while
> > the parent process would get() them as long as there is an active
> > subprocess....
>
> Well, if the protocol for a worker is:
>      <someloop>:
>           <calculate>
>           queue.put(result)
>      queue.put(<worker_end_sentinel>)
>      queue.close()
>
> Then you can keep count of how many have finished in the consumer.

Yes, I could, but I don't like the idea of using a sentinel if I still
need to close the queue. If I mark the queue as closed, or close a
connection through a pipe, why do I still have to "mark" it closed with
a sentinel as well? From my point of view that's duplication, so I dare
say the multiprocessing module is missing something quite important.
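To be concrete about what I mean, here is roughly what the
sentinel-counting version would look like (an untested sketch; the
worker() body, the four processes and the None sentinel are only
placeholders for the real thing):

    import multiprocessing

    SENTINEL = None   # end-of-work marker, one per worker

    def worker(queue):
        for i in range(5):        # stand-in for the real calculation
            queue.put(i * i)
        queue.put(SENTINEL)       # tell the consumer this worker is done
        queue.close()

    if __name__ == '__main__':
        queue = multiprocessing.Queue()
        workers = [multiprocessing.Process(target=worker, args=(queue,))
                   for _ in range(4)]
        for w in workers:
            w.start()

        finished = 0
        while finished < len(workers):
            result = queue.get()
            if result is SENTINEL:
                finished += 1     # one more worker has ended
            else:
                print(result)

        for w in workers:
            w.join()

It does the job, but the put(SENTINEL) and the close() are effectively
saying the same thing twice.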

It is probably possible to track the pipe's state through a
multiprocessing manager instead, and so avoid duplicating the state
exchange, but I haven't tried it yet.
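If it does work, I imagine it would look something like this (again a
rough, untested sketch; the shared counter `active`, the lock and the
0.1 s timeout are just my guesses at how it could be wired up):

    import multiprocessing
    try:
        from Queue import Empty    # Python 2
    except ImportError:
        from queue import Empty    # Python 3

    def worker(queue, active, lock):
        for i in range(5):         # stand-in for the real calculation
            queue.put(i * i)
        with lock:
            active.value -= 1      # announce that this worker has finished

    if __name__ == '__main__':
        manager = multiprocessing.Manager()
        queue = multiprocessing.Queue()
        nworkers = 4
        active = manager.Value('i', nworkers)   # shared count of live workers
        lock = manager.Lock()                   # guards the read-modify-write

        workers = [multiprocessing.Process(target=worker,
                                           args=(queue, active, lock))
                   for _ in range(nworkers)]
        for w in workers:
            w.start()

        # Drain results; a timed get() keeps us from blocking forever
        # once the last worker has decremented the counter.
        while True:
            try:
                print(queue.get(timeout=0.1))
            except Empty:
                if active.value == 0:
                    break          # nobody left to produce anything

        for w in workers:
            w.join()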

Best regards,
Michal Chruszcz


