[Python-ideas] reducing multiprocessing.Queue contention

Eli Bendersky eliben at gmail.com
Thu Jan 24 14:25:08 CET 2013


On Wed, Jan 23, 2013 at 12:03 PM, Charles-François Natali <cf.natali at gmail.com> wrote:

> > In general, this sounds good. There's indeed no reason to perform the
> > serialization under a lock.
> >
> > It would be great to have some measurements to see just how much it
> takes,
> > though.
>
> I was curious, so I wrote a quick and dirty patch (it doesn't
> support timed get()/put(), so I won't post it here).
>
> I used the attached script as a benchmark: basically, it just spawns
> a bunch of processes that repeatedly put()/get() some data to/from a
> queue (10000 times a list of 1024 ints), and returns when everything
> has been sent and received.
>
> The following tests were run on an 8-core box, from 1 reader/1
> writer up to 4 readers/4 writers (it would be interesting to test
> with only 1 writer and multiple readers, but the readers would keep
> waiting for input, so that requires a different benchmark):
>
> Without patch:
> """
> $ ./python /tmp/multi_queue.py
> took 0.7993290424346924 seconds with 1 workers
> took 1.8892168998718262 seconds with 2 workers
> took 3.075777053833008 seconds with 3 workers
> took 4.050479888916016 seconds with 4 workers
> """
>
> With patch:
> """
> $ ./python /tmp/multi_queue.py
> took 0.7730131149291992 seconds with 1 workers
> took 0.7471320629119873 seconds with 2 workers
> took 0.752316951751709 seconds with 3 workers
> took 0.8303961753845215 seconds with 4 workers
> """
>
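A minimal sketch of the idea under discussion, with illustrative names
(this is not the actual patch; the real Queue also has a feeder thread,
a bounded-size semaphore and timeout handling, all omitted here):

"""
import pickle

class SketchQueue:
    # Two ends of a multiprocessing.Pipe() plus two multiprocessing.Lock()s
    # shared between the worker processes.
    def __init__(self, reader, writer, rlock, wlock):
        self._reader, self._writer = reader, writer
        self._rlock, self._wlock = rlock, wlock

    def put(self, obj):
        # Serialization can happen concurrently in every writer...
        data = pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL)
        # ...only the raw write to the shared pipe needs the lock.
        with self._wlock:
            self._writer.send_bytes(data)

    def get(self):
        with self._rlock:
            data = self._reader.recv_bytes()
        # Deserialization also happens outside the lock.
        return pickle.loads(data)
"""
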

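The attached multi_queue.py isn't reproduced here, but a benchmark
along the lines described above could look roughly like this (item
count, payload size and worker counts follow the description in the
mail):

"""
import time
import multiprocessing

N_ITEMS = 10000
PAYLOAD = list(range(1024))   # a list of 1024 ints

def writer(queue):
    for _ in range(N_ITEMS):
        queue.put(PAYLOAD)

def reader(queue):
    for _ in range(N_ITEMS):
        queue.get()

def run(n_workers):
    # n_workers writers and n_workers readers share one queue, so the
    # total number of put()s matches the total number of get()s.
    queue = multiprocessing.Queue()
    procs = ([multiprocessing.Process(target=writer, args=(queue,))
              for _ in range(n_workers)] +
             [multiprocessing.Process(target=reader, args=(queue,))
              for _ in range(n_workers)])
    start = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("took %s seconds with %s workers" % (time.time() - start, n_workers))

if __name__ == '__main__':
    for n in range(1, 5):
        run(n)
"""
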
Looks great, what are you waiting for :-)

Eli

