[issue11314] Subprocess suffers 40% process creation overhead penalty
STINNER Victor
report at bugs.python.org
Wed Mar 2 13:52:35 CET 2011
STINNER Victor <victor.stinner at haypocalc.com> added the comment:
pitrou> Victor, your patch doesn't even apply on 3.x.
pitrou> That code doesn't exist anymore...
subprocess.Popen() does still read errpipe_read, but with a buffer of 50,000 bytes instead of 1 MB (the traceback is no longer sent to the parent process).
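For context, the error pipe works roughly like this (a simplified sketch of the mechanism, not the actual _posixsubprocess code; on Python 3.4+ os.pipe() descriptors are close-on-exec by default, which this relies on):

```python
import os

def spawn_with_errpipe(argv):
    # Sketch of subprocess's error pipe: a successful execv closes the
    # close-on-exec write end, so the parent's read() returns b"".
    # If execv fails, the child writes the errno before exiting.
    errpipe_read, errpipe_write = os.pipe()
    pid = os.fork()
    if pid == 0:  # child
        os.close(errpipe_read)
        try:
            os.execv(argv[0], argv)
        except OSError as exc:
            os.write(errpipe_write, str(exc.errno).encode())
        os._exit(255)
    # parent
    os.close(errpipe_write)
    data = os.read(errpipe_read, 50000)  # the buffer size mentioned above
    os.close(errpipe_read)
    os.waitpid(pid, 0)
    if data:
        raise OSError(int(data), "child execv failed")
    return pid
```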
Benchmark on Python 3.2 (debug build, same computer as msg129880):
- fork + execv + waitpid: 20052.0 ms
- os.popen: 40241.7 ms
- subprocess.Popen (C): 28467.2 ms
- subprocess.Popen (C, close_fds=False): 22145.4 ms
- subprocess.Popen (Python): 40351.5 ms
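A minimal harness along these lines reproduces the first and third comparisons (a sketch only — the iteration count and the use of /bin/true are placeholders, not the exact benchmark run here):

```python
import os
import subprocess
import time

def bench_fork_execv(n, argv):
    # Raw primitives: fork the child, exec the program, wait for it.
    start = time.perf_counter()
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            try:
                os.execv(argv[0], argv)
            finally:
                os._exit(127)  # only reached if execv failed
        os.waitpid(pid, 0)
    return (time.perf_counter() - start) * 1000.0  # ms

def bench_popen(n, argv, close_fds=True):
    # subprocess.Popen adds its error pipe and (with close_fds=True)
    # a pass over the file descriptor table on top of fork+exec.
    start = time.perf_counter()
    for _ in range(n):
        subprocess.Popen(argv, close_fds=close_fds).wait()
    return (time.perf_counter() - start) * 1000.0  # ms
```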
Bad:
- os.popen is 41% slower than subprocess: I suppose it is the use of stdout=PIPE (creation of the pipe) that makes it slower. But 41% is huge just to create a pipe (without writing into it)!
- subprocess(close_fds=True) (default) is 22% slower than subprocess(close_fds=False)
- os.popen of Python 3 is 56% slower than os.popen of Python 2
Good:
- subprocess of Python 3 is 29% faster than subprocess of Python 2.
Other results:
- subprocess of Python 3 is 9% slower than patched subprocess of Python 2.
- subprocess (default options) is 42% slower than fork+execv+waitpid (this is close to the Python 2 overhead).
- subprocess implemented in Python is 42% slower than the C implementation of subprocess.
pitrou> Looks like there's a regression on both os.popen and subprocess.popen.
os.popen() uses subprocess in Python 3. The worst regression is that os.popen in Python 3 is 56% slower than in Python 2. I don't think it is related to Unicode, because my benchmark doesn't write or read any data.
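That is why os.popen inherits all of subprocess's process-creation cost plus a pipe: in Python 3 it is essentially a thin wrapper over subprocess.Popen. A simplified sketch of the read-mode path (the real implementation also wraps the stream so that closing it waits for the child; that detail is omitted here):

```python
import subprocess

def popen_sketch(cmd):
    # Roughly what Python 3's os.popen(cmd, "r") does: subprocess.Popen
    # with shell=True and stdout=PIPE, so every call pays subprocess's
    # fork/exec overhead plus the creation of the stdout pipe.
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    return proc.stdout
```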
----------
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue11314>
_______________________________________