multiprocessing shows no benefit

Jason jasonhihn at gmail.com
Tue Oct 17 10:52:17 EDT 2017


I've got a problem that I thought would scale well across cores.

import numpy as np
from multiprocessing import Pool

def f(t):
    return t[0] - d[t[1]]

d = {k: np.array(k) for k in entries_16k}
e = np.array(...)  # the single item of 32 3D points

pool = Pool()
pool.map(f, [(e, k) for k in d])

At the heart of it is a list of ~16k numpy arrays (32 3D points each) stored in a single dict. Using pool.map() I pass the single item of 32 3D points to be evaluated against the 16k entries. In theory this would give a speedup proportional to the number of physical cores, but although I see all 4 cores maxed out, the wall time is the same as a plain map().

How can I use pool.map better?
