[issue17038] multiprocessing only uses one core when C modules are imported

Charles-François Natali report at bugs.python.org
Sat Jan 26 13:01:58 CET 2013


Charles-François Natali added the comment:

> Indeed, some values change when I print the "cpu" lines from /proc/self/status, but I don't really understand what that means...

It means that the CPU affinity is changed, so the scheduler binds your
process to a subset of the available CPUs.
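
If you want to check the mask from Python itself, here is a minimal
sketch (assuming a Linux /proc filesystem):
"""
# Print the kernel's view of this process's CPU affinity (Linux only).
# After the offending import, Cpus_allowed_list typically shrinks to a
# single CPU, e.g. "0".
with open("/proc/self/status") as f:
    for line in f:
        if line.startswith("Cpus_allowed"):
            print(line.strip())
"""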

> So there is no solution for that? We cannot use multiprocessing with these modules under Linux?
>
> Do you think I can manually change the CPU affinity, or at least block changes made by other modules (without recompiling them, of course)?

If you don't want to recompile the offending module (it seems to be
GotoBLAS2), you can change the affinity after importing your modules
with something like:
"""
os.system("taskset -p 0xff %d" % os.getpid())
"""

(Python 3.3 exposes the scheduling API natively, see
http://docs.python.org/dev/library/os.html#interface-to-the-scheduler)
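
On 3.3 and later you can therefore reset the mask without shelling out
to taskset; a sketch along these lines, assuming a Linux host:
"""
import os
import multiprocessing

# Rebind the current process (pid 0) to every logical CPU, undoing
# whatever the imported library did to the affinity mask.
os.sched_setaffinity(0, range(multiprocessing.cpu_count()))
print(os.sched_getaffinity(0))  # e.g. {0, 1, 2, 3, 4, 5, 6, 7}
"""
Worker processes started by multiprocessing after this call inherit the
restored mask.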

> I guess if it's possible to modify CPU affinity, numpy and other scientific libraries won't be as efficient as before, no? Because they won't share the same cache or something like that, if I understand the Wikipedia page about CPU affinity correctly.

Soft affinity is naturally done by the operating system scheduler, so
you don't have to worry about cache line bouncing and the like. Hard
affinity is only useful in very specific cases.

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue17038>
_______________________________________

