multiprocessing speedup

Rita rmorgan466 at gmail.com
Tue Sep 29 20:50:43 EDT 2015


Thanks for the responses.

I will create another thread to supply a more realistic example.


On Tue, Sep 29, 2015 at 10:12 AM, Oscar Benjamin <oscar.j.benjamin at gmail.com> wrote:

> On Tue, 29 Sep 2015 at 02:22 Rita <rmorgan466 at gmail.com> wrote:
>
>> I am using multiprocessing with apply_async to do some work. Each
>> task takes a few seconds, but I have several thousand tasks. I was wondering
>> if there is a more efficient method, especially since I plan to operate
>> on large in-memory arrays (NumPy).
>>
>
>> Here is what I have now:
>>
>> import multiprocessing as mp
>> import random
>>
>> def f(n):
>>     # Count how many of n random points land inside the unit quarter circle.
>>     count = 0
>>     for _ in range(n):
>>         x = random.random()
>>         y = random.random()
>>         if x*x + y*y <= 1:
>>             count += 1
>>     return count
>>
>
> I assume you're using the code shown as a toy example while playing with
> the multiprocessing module? If not, then the function f can be made much
> more efficient.
>
> The problem is that, while it's good that you have distilled your problem
> into a simple program for testing, it's not really possible to find a more
> efficient approach without identifying the bottleneck, which means looking
> at the full problem.
>
> --
> Oscar
>
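For reference, here is a minimal sketch of the kind of rewrite Oscar hints at
when he says f can be made much more efficient: replacing the pure-Python loop
with NumPy vector operations. The function name is illustrative, and this
assumes the goal is the Monte Carlo hit count shown above.

import numpy as np

def f_vectorized(n):
    # Draw all n coordinate pairs in single calls instead of looping in Python.
    x = np.random.random(n)
    y = np.random.random(n)
    # Count the points that fall inside the unit quarter circle.
    return int(np.count_nonzero(x * x + y * y <= 1.0))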


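On the dispatch side, apply_async pays a per-call inter-process round trip,
which adds up with several thousand short tasks. A hedged sketch of one common
alternative (the task sizes and chunksize here are illustrative, not from the
thread) is to batch the tasks with imap_unordered and a chunksize:

import multiprocessing as mp
import random

def f(n):
    # Monte Carlo: count random points inside the unit quarter circle.
    count = 0
    for _ in range(n):
        x = random.random()
        y = random.random()
        if x * x + y * y <= 1.0:
            count += 1
    return count

if __name__ == '__main__':
    tasks = [10000] * 5000  # several thousand small tasks
    with mp.Pool() as pool:
        # chunksize=100 ships tasks to workers in batches of 100,
        # amortizing the pickling and IPC cost of each dispatch.
        total = sum(pool.imap_unordered(f, tasks, chunksize=100))
    print('pi is approximately', 4.0 * total / (10000 * 5000))

For the large NumPy arrays mentioned in the question, the same batching idea
applies, but it also helps to avoid pickling each array per task, for example
by creating the arrays before the Pool is started so that (on Unix) forked
workers inherit them, instead of passing them through apply_async.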

-- 
--- Get your facts first, then you can distort them as you please.--