multiprocessing speedup

Rita rmorgan466 at gmail.com
Mon Sep 28 21:20:16 EDT 2015


I am using multiprocessing with apply_async to do some work. Each task
takes a few seconds, but I have several thousand tasks. I was wondering if
there is a more efficient method, especially since I plan to operate on
large in-memory numpy arrays.

Here is what I have now:


import multiprocessing as mp
import random

def f(n_points):
    # Monte Carlo sample: count how many of n_points random points
    # in the unit square land inside the quarter circle.
    count = 0
    for i in range(n_points):
        x = random.random()
        y = random.random()
        if x*x + y*y <= 1:
            count += 1
    return count

def main():
    resultObj = []
    n = 10000
    P = mp.Pool(2)
    for arg in xrange(n):
        resultObj.append(P.apply_async(f, (arg,)))
    P.close()
    P.join()
    result = [r.get() for r in resultObj]
    print sum(result) / float(n)   # float() avoids integer division

if __name__ == "__main__":
    main()
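
One alternative I have been looking at (just a sketch, with chunksize=100
picked arbitrarily) is handing the whole argument list to Pool.map with an
explicit chunksize, so the arguments are shipped to the workers in batches
rather than as one message per task:

def main_map():
    n = 10000
    P = mp.Pool(2)
    # chunksize groups the 10000 arguments into batches of 100,
    # cutting the per-task IPC and scheduling overhead.
    result = P.map(f, xrange(n), chunksize=100)
    P.close()
    P.join()
    print sum(result) / float(n)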

1) Does multiprocessing do a fork for each task, or only once per worker
when the pool is created?
2) If so, I assume that's costly due to process setup and teardown. Would
this be the case?
3) I plan to pass large arrays to the function f; is there a more
efficient way to get them to the workers? (One idea I have seen is
sketched below.)
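
For question 3, one idea I have seen suggested (a sketch only, assuming a
fork-based platform like Linux and read-only access in the workers) is to
allocate the array in shared memory before creating the pool, so the
children inherit it instead of receiving a pickled copy:

import multiprocessing as mp
import numpy as np

# Allocate a flat array of doubles in shared memory. Child processes
# created by fork() inherit it; nothing is pickled or copied per task.
_shared = mp.RawArray('d', 10**6)
big = np.frombuffer(_shared, dtype=np.float64)
big[:] = np.random.random(len(big))   # fill once, in the parent

def partial_sum(bounds):
    lo, hi = bounds
    # Each worker reads its own slice of the inherited shared array.
    return big[lo:hi].sum()

def main_shared():
    P = mp.Pool(2)
    chunks = [(i, i + 10**5) for i in xrange(0, 10**6, 10**5)]
    partials = P.map(partial_sum, chunks)
    P.close()
    P.join()
    print sum(partials)

The appeal is that fork gives every worker the same pages without any
serialization; if the workers needed to write, they would need a lock or
separate output arrays.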

-- 
--- Get your facts first, then you can distort them as you please.--