multiprocessing speedup

Oscar Benjamin oscar.j.benjamin at gmail.com
Tue Sep 29 10:12:39 EDT 2015


On Tue, 29 Sep 2015 at 02:22 Rita <rmorgan466 at gmail.com> wrote:

> I am using multiprocessing with apply_async to do some work. Each task
> takes a few seconds but I have several thousand tasks. I was wondering if
> there is a more efficient method, especially since I plan to operate on
> large in-memory (numpy) arrays.
>

> Here is what I have now:
>
> import multiprocessing as mp
> import random
>
> def f(n):
>     count = 0
>     for _ in range(n):
>         x = random.random()
>         y = random.random()
>         if x*x + y*y <= 1:
>             count += 1
>
>     return count
>

I assume you're using the code shown just as a toy example for playing with
the multiprocessing module? If not, then the function f itself can be made
much more efficient.
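
For example, if the goal really were the Monte Carlo estimate itself, a
vectorised numpy version (just a sketch, not anything from your actual
code) does the same counting without a Python-level loop:

import numpy as np

def f_vectorised(n):
    # Draw all n coordinate pairs at once; the loop runs in compiled code.
    xs = np.random.random(n)
    ys = np.random.random(n)
    # Count the points that fall inside the unit quarter-circle.
    return int(np.count_nonzero(xs*xs + ys*ys <= 1))

# Same pi estimate as the original f, but typically far faster:
n = 10**7
print(4 * f_vectorised(n) / n)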

The problem is that, while it's good that you have distilled your problem
into a simple program for testing, it's not really possible to suggest a
more efficient approach without finding the bottleneck, and that means
looking at the full problem.
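
That said, one general observation: with several thousand tasks,
dispatching each one individually through apply_async carries per-task
communication overhead. A common alternative (a minimal sketch, assuming
the tasks are independent) is to let Pool.map batch them with an explicit
chunksize:

import multiprocessing as mp
import random

def f(n):
    count = 0
    for _ in range(n):
        x = random.random()
        y = random.random()
        if x*x + y*y <= 1:
            count += 1
    return count

if __name__ == '__main__':
    # Several thousand tasks, as in the original post.
    tasks = [10000] * 5000
    with mp.Pool() as pool:
        # chunksize hands tasks to workers in batches of 100,
        # cutting the per-task inter-process overhead.
        results = pool.map(f, tasks, chunksize=100)
    print(sum(results))

For large numpy arrays the trade-offs change again: the cost of pickling
arrays between processes often dominates, so how the data reaches the
workers can matter far more than how the tasks are dispatched.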

--
Oscar