From victor.stinner at gmail.com Sun Aug 7 21:38:39 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 8 Aug 2016 03:38:39 +0200 Subject: [Speed] Tracking memory usage In-Reply-To: References: Message-ID: 2016-07-30 19:48 GMT+02:00 Armin Rigo : > Hi Victor, > > Fwiw, there is some per-OS (and even apparently > per-Linux-distribution) solution mentioned here: > http://stackoverflow.com/questions/774556/peak-memory-usage-of-a-linux-unix-process > > For me on Arch Linux, "/usr/bin/time -v CMD" returns a reasonable > value in "Maximum resident set size (kbytes)". I guess that on OSes > where this works, it gives a zero-overhead, exact answer. Oh, I guess that it uses the ru_maxrss field of getrusage(RUSAGE_CHILDREN). It's also possible to get the maximum RSS of the current process in pure Python: >>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss 98700 It looks like Linux kernel 2.6.32 or newer is required. Fortunately, this kernel version is now quite old (December 3rd 2009). But I guess that RSS is coarser than getting the sum of the private memory from /proc/pid/smaps (Linux 2.6.16 or newer). Sadly, it looks like the kernel only provides the maximum for RSS memory (not for private memory).
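A minimal pure-Python sketch of the two measurements discussed above (an illustration only, not code from the perf module; it assumes Linux, where ru_maxrss is in kilobytes — on macOS the same field is in bytes):

```python
import resource

def peak_rss_kib():
    # Peak RSS of the current process (Linux >= 2.6.32 for RUSAGE_SELF).
    # On Linux ru_maxrss is reported in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def private_memory_kib():
    # Sum of the private mappings from /proc/self/smaps (Linux >= 2.6.16).
    # Unlike ru_maxrss this is a *current* value, not a peak: the kernel
    # keeps no high-water mark for private memory.
    total = 0
    with open("/proc/self/smaps") as smaps:
        for line in smaps:
            if line.startswith(("Private_Clean:", "Private_Dirty:")):
                total += int(line.split()[1])  # value is in kB
    return total
```

Both helpers are Linux-specific; /proc/self/smaps does not exist on other platforms.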
Victor From victor.stinner at gmail.com Wed Aug 17 11:38:15 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 17 Aug 2016 17:38:15 +0200 Subject: [Speed] perf 0.7.3 released Message-ID: The Python perf module is a toolkit to write, run, analyze and modify benchmarks: http://perf.readthedocs.io/en/latest/ Version 0.7.3 (2016-08-17): * add a new ``slowest`` command * convert: add ``--extract-metadata=NAME`` * add ``--tracemalloc`` option: use the ``tracemalloc`` module to track Python memory allocation and get the peak of memory usage in metadata (``tracemalloc_peak``) * add ``--track-memory`` option: run a thread reading the memory usage every millisecond and store the peak as ``mem_peak`` metadata * ``compare_to``: add ``--group-by-speed`` (``-G``) and ``--min-speed`` options * metadata: add ``runnable_threads`` * Fix issues on ppc64le Power8 Victor From victor.stinner at gmail.com Wed Aug 17 11:42:12 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 17 Aug 2016 17:42:12 +0200 Subject: [Speed] Tracking memory usage In-Reply-To: References: Message-ID: Ok, I just added the mem_max_rss metadata in the development version of the perf module, just after the perf 0.7.3 release. Victor 2016-08-08 3:38 GMT+02:00 Victor Stinner : > 2016-07-30 19:48 GMT+02:00 Armin Rigo : >> Hi Victor, >> >> Fwiw, there is some per-OS (and even apparently >> per-Linux-distribution) solution mentioned here: >> http://stackoverflow.com/questions/774556/peak-memory-usage-of-a-linux-unix-process >> >> For me on Arch Linux, "/usr/bin/time -v CMD" returns a reasonable >> value in "Maximum resident set size (kbytes)". I guess that on OSes >> where this works, it gives a zero-overhead, exact answer. > > Oh, I guess that it uses the ru_maxrss field of > getrsage(RUSAGE_CHILDREN). 
It's also possible to get the maximum RSS > of the current process in pure Python: > >>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss > 98700 > > It looks like Linux kernel 2.6.32 or newer is required. Hopefully, > this kernel version is now quite old (December 3rd 2009). > > But I guess that RSS is more coarse than getting the sum of the > private memory from /proc/pid/smaps (Linux 2.6.16 or newer). Sadly, it > looks like the kernel only provides the maximum for RSS memory (not > for private memory). > > Victor From victor.stinner at gmail.com Wed Aug 17 20:37:11 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Aug 2016 02:37:11 +0200 Subject: [Speed] New benchmark suite for Python Message-ID: Hi, After a few months of work, I created a "new" benchmark suite for Python: https://github.com/python/benchmarks It's based on: https://hg.python.org/sandbox/benchmarks_perf which is my fork of the CPython benchmark suite: https://hg.python.org/benchmarks which is based on Unladen Swallow's benchmark suite (if I understood correctly). Major differences: * Use the perf module to run benchmarks in multiple processes and store results as JSON * Create virtual environments using requirements.txt to download dependencies from PyPI (rather than using old copies of libraries) * Many libraries have been upgraded: see requirements.txt The project works on Python 2 and Python 3 (I tested 2.7 and 3.6). Known regressions: * Memory tracking is broken * run_compare command is currently broken: use run (store result into a file) + compare manually * Some benchmarks have been removed: rietveld, spitfire (not on PyPI), pystone, gcbench, tuple_gc_hell * I only tested Linux; I expect issues on Windows. (I didn't try my perf module on Windows yet.) I already allowed all Python core developers to push to the GitHub project. We can create a new "benchmarks" (or "Performance" maybe?) team if we want to allow more contributors who are not core developers.
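The multi-process design and JSON output described above can be sketched with the stdlib alone (a hypothetical illustration of the idea, not the suite's or the perf module's actual code):

```python
import json
import statistics
import subprocess
import sys

# Each worker runs in a fresh interpreter process and prints one timing
# sample in seconds.  Spreading samples over independent processes keeps
# one run's warmup state from leaking into the next sample.
WORKER = """
import time
t0 = time.perf_counter()
sorted(list(range(100000)))
print(time.perf_counter() - t0)
"""

def run_benchmark(processes=3):
    samples = []
    for _ in range(processes):
        out = subprocess.check_output([sys.executable, "-c", WORKER])
        samples.append(float(out.decode()))
    return samples

samples = run_benchmark()
# Store the result as JSON, as the suite's "run" command does.
result = {"name": "sort", "samples": samples,
          "median": statistics.median(samples)}
print(json.dumps(result))
```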
PyPy, Pyston, Pyjion, Numba, etc. : Hey! It's now time to start to take a look at my project and test it ;-) Tell me what is broken, what is missing, and I will try to help you move your project to this new benchmark suite! As requested (suggested?) by Brett Cannon, the Git repository has no history, it only contains 1 commit! I'm really sorry about losing all the history and all authors, but it allows us to start with a much smaller repository: around 2 MB. The current benchmark repository is closer to 200 MB! TODO: * continue to upgrade libraries in requirements.txt. I failed to upgrade Django to 1.10; it complains about a missing template engine config setting. * convert more code to the perf module, like "startup" tests * run benchmarks and analyze results ;-) * write more documentation explaining how to run reliable benchmarks * ... Victor From victor.stinner at gmail.com Wed Aug 17 21:17:33 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Aug 2016 03:17:33 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: 2016-08-18 2:37 GMT+02:00 Victor Stinner : > PyPy, Pyston, Pyjion, Numba, etc. : Hey! it's now time to start to > take a look at my project and test it ;-) Tell me what is broken, what > is missing, and I will try to help you to move your project to this > new benchmark suite! I made a quick test on PyPy: the creation of the virtual environment fails when trying to compile a Cython extension to install bzr. Since bzr is not tested by the PyPy benchmark suite, I just removed the bzr_startup benchmark to make my Python benchmark suite compatible with PyPy. > As requested (suggested?) by Brett Cannon, the Git repository has no > history, ... Reference: https://mail.python.org/pipermail/speed/2016-July/000401.html Brett: "I say just start a new repo from scratch.
(...)" Victor From zachary.ware+pydev at gmail.com Thu Aug 18 00:47:29 2016 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Wed, 17 Aug 2016 23:47:29 -0500 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On Wed, Aug 17, 2016 at 7:37 PM, Victor Stinner wrote: > PyPy, Pyston, Pyjion, Numba, etc. : Hey! it's now time to start to > take a look at my project and test it ;-) Tell me what is broken, what > is missing, and I will try to help you to move your project to this > new benchmark suite! Also, if you're interested in having your interpreter benchmarked on speed.python.org, contact me with clear instructions (preferably in the form of a shell or Python script) on how to build, test, install, and invoke your interpreter from a fresh Ubuntu 16.04 installation. As an example, here's an untested version for CPython 3.x: #!/bin/sh # set up dependencies sudo apt-get build-dep -y python3 sudo apt-get install -y --no-install-recommends mercurial # get the code hg clone https://hg.python.org/cpython cd cpython # build ./configure --prefix=/opt/python/default make -j12 # test make buildbottest TESTOPTS=-j12 # install make install # invoke /opt/python/default/bin/python3 I don't know when I'll have a chance to work on it, but I'd like to get as many projects as possible benchmarked on speed.python.org. Victor: Thanks for getting the new repository set up and for all your work on the new runner! I'm looking forward to trying it out. -- Zach From arigo at tunes.org Thu Aug 18 02:48:40 2016 From: arigo at tunes.org (Armin Rigo) Date: Thu, 18 Aug 2016 08:48:40 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: Hi Victor, On 18 August 2016 at 03:17, Victor Stinner wrote: > I made a quick test on PyPy: the creation of the virtual environment > fails when trying to compile a Cython extension to install bzr. 
Indeed, bzr cannot be installed on PyPy because it uses Cython in a strange way: it declares and directly pokes inside PyListObjects from a .pyx file. But note that bzr seems to systematically have a pure Python version of all its .pyx files. The fix might be as simple as changing setup.py to check for PyPy, and in this case do nothing in add_pyrex_extension(). According to https://answers.launchpad.net/bzr/+faq/703 it should work without any Cython code too. À bientôt, Armin. From stefan_ml at behnel.de Thu Aug 18 02:46:11 2016 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 18 Aug 2016 08:46:11 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: <55558da7-fbfb-f556-4df2-a19d8c9182c3@behnel.de> Zachary Ware wrote on 18.08.2016 at 06:47: > On Wed, Aug 17, 2016 at 7:37 PM, Victor Stinner wrote: >> PyPy, Pyston, Pyjion, Numba, etc. : Hey! it's now time to start to >> take a look at my project and test it ;-) Tell me what is broken, what >> is missing, and I will try to help you to move your project to this >> new benchmark suite! > > Also, if you're interested in having your interpreter benchmarked on > speed.python.org, contact me with clear instructions (preferably in > the form of a shell or Python script) on how to build, test, install, > and invoke your interpreter from a fresh Ubuntu 16.04 installation. > As an example, here's an untested version for CPython 3.x: > > #!/bin/sh > # set up dependencies > sudo apt-get build-dep -y python3 > sudo apt-get install -y --no-install-recommends mercurial > # get the code > hg clone https://hg.python.org/cpython > cd cpython > # build > ./configure --prefix=/opt/python/default > make -j12 > # test > make buildbottest TESTOPTS=-j12 > # install > make install > # invoke > /opt/python/default/bin/python3 > > > I don't know when I'll have a chance to work on it, but I'd like to > get as many projects as possible benchmarked on speed.python.org.
Is there a repository somewhere with existing runner scripts that I could look at and send a pull request to? I saw the python/speed.python.org project on github, but that seems rather dead. > Victor: Thanks for getting the new repository set up and for all your > work on the new runner! I'm looking forward to trying it out. +1 Stefan From fijall at gmail.com Thu Aug 18 03:17:37 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 18 Aug 2016 16:17:37 +0900 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: Hey Victor Did you look into integrating the pypy benchmarks that we added over the years? On Thu, Aug 18, 2016 at 9:37 AM, Victor Stinner wrote: > Hi, > > After a few months of work, I created a "new" benchmark suite for Python: > > https://github.com/python/benchmarks > > It's based on: > https://hg.python.org/sandbox/benchmarks_perf > which is my fork the CPython benchmark suite: > https://hg.python.org/benchmarks > which is based on Unladen Swallow's benchmark suite (if I understood correctly). > > Major differences: > > * Use the perf module to run benchmarks in multiple processes and > store results as JSON > * Create virtual environments using requirements.txt to download > dependencies from PyPI (rather than using old copies of libraries) > * Many libraries have been upgraded: see requirements.txt > > The project works on Python 2 and Python 3 (I tested 2.7 and 3.6). > > Known regressions: > > * Memory tracking is broken > * run_compare command is currently broken: use run (store result into > a file) + compare manually > * Some benchmarks have been removed: rietveld, spitfire (not on PyPI), > pystone, gcbench, tuple_gc_hell > * I only tested Linux, I expect issues on Windows. (I didn't try my > perf module on Windows yet.) > > I already allowed all Python core developers to push to the GitHub > project. We can create a new "benchmarks" (or "Performance" maybe?) 
> team if we want to allow more contributors who are not core > developers. > > PyPy, Pyston, Pyjion, Numba, etc. : Hey! it's now time to start to > take a look at my project and test it ;-) Tell me what is broken, what > is missing, and I will try to help you to move your project to this > new benchmark suite! > > As requested (suggested?) by Brett Canon, the Git repository has no > history, it only contains 1 commit! I'm really sorry of loosing all > the history and all authors, but it allows to start with a much > smaller repository: around 2 MB. The current benchmark repository is > more around 200 MB! > > TODO: > > * continue to upgrade libraries in requirements.txt. I failed to > upgrade Django to 1.10, it complains about a missing template engine > config setting. > * convert more code to the perf module, like "startup" tests > * run benchmarks and analyze results ;-) > * write more documentation explaining how to run reliable benchmarks > * ... > > Victor > _______________________________________________ > Speed mailing list > Speed at python.org > https://mail.python.org/mailman/listinfo/speed From zachary.ware+pydev at gmail.com Thu Aug 18 11:40:46 2016 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Thu, 18 Aug 2016 10:40:46 -0500 Subject: [Speed] New benchmark suite for Python In-Reply-To: <55558da7-fbfb-f556-4df2-a19d8c9182c3@behnel.de> References: <55558da7-fbfb-f556-4df2-a19d8c9182c3@behnel.de> Message-ID: On Thu, Aug 18, 2016 at 1:46 AM, Stefan Behnel wrote: > Is there a repository somewhere with existing runner scripts that I could > look at and send a pull request to? I saw the python/speed.python.org > project on github, but that seems rather dead. The whole speed.python.org project is a bit of a mess currently, my goal when getting it set up was to just get it actually working without much regard for keeping things tidy. 
https://github.com/python/speed.python.org was a placeholder site; speed.python.org is now serving https://github.com/zware/codespeed. The benchmark runner is currently the standard perf.py in hg.python.org/benchmarks, driven by a script very similar to the 'run_and_upload.py' in http://bugs.python.org/file41202/benchmarks.diff which lives on the benchmark runner and is not currently version-controlled. Benchmark runs are started by buildbot.python.org, see http://buildbot.python.org/all/buildslaves/speed-python (the master config is not public, that's also on my list of things to get to along with rewriting the master config to handle the GitHub move and other improvements. The commands passed to the runner can be gleaned from the buildbot logs, though). There's not really any good way to send a PR on anything but the speed.python.org site right now. -- Zach From victor.stinner at gmail.com Thu Aug 18 11:54:14 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Aug 2016 17:54:14 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: 2016-08-18 6:47 GMT+02:00 Zachary Ware : > Also, if you're interested in having your interpreter benchmarked on > speed.python.org, contact me with clear instructions (preferably in > the form of a shell or Python script) on how to build, test, install, > and invoke your interpreter from a fresh Ubuntu 16.04 installation. Cool. I would prefer to wait a few days until people have the opportunity to test the new benchmark suite on their computers and on different platforms. I'm not sure that it's stable yet. For example, I just modified the telco benchmark to use I/O in memory, rather than using the filesystem (disk).
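The telco change described here amounts to swapping a file object for an in-memory buffer; a sketch of the idea (hypothetical data and function names, not the telco benchmark's actual code):

```python
import io

# Hypothetical stand-in for the telco input data: what matters is that
# each iteration reads from an in-memory buffer instead of the
# filesystem, keeping disk and page-cache noise out of the timings.
data = b"\x00\x01" * 1000

def process(stream):
    # Read 2-byte records and accumulate their values.
    total = 0
    while True:
        chunk = stream.read(2)
        if not chunk:
            break
        total += int.from_bytes(chunk, "big")
    return total

result = process(io.BytesIO(data))  # was: process(open(path, "rb"))
```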
Victor From victor.stinner at gmail.com Thu Aug 18 11:55:44 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Aug 2016 17:55:44 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: 2016-08-18 8:48 GMT+02:00 Armin Rigo : > Indeed, bzr cannot be installed on PyPy because it uses Cython in a > strange way: it declares and directly pokes inside PyListObjects from > a .pyx file. But note that bzr (seems to) have systematically a pure > Python version of all its .pyx files. (...) bzr is only used for a "startup" benchmark. I don't think that such a benchmark is very interesting... I would prefer to see a benchmark of a less trivial operation on the repository than displaying the help... Victor From victor.stinner at gmail.com Thu Aug 18 11:57:04 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Aug 2016 17:57:04 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: Hi Maciej, 2016-08-18 9:17 GMT+02:00 Maciej Fijalkowski : > Did you look into integrating the pypy benchmarks that we added over the years? Not yet, but yes, I plan to collect benchmarks from PyPy, Pyston, Pyjion, etc. Later we can discuss if some benchmarks should be disabled in the default set of benchmarks. Victor From victor.stinner at gmail.com Thu Aug 18 12:04:02 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 18 Aug 2016 18:04:02 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: 2016-08-18 3:17 GMT+02:00 Victor Stinner : > I made a quick test on PyPy: (...) I ran a longer test last night. Some benchmarks were slower on PyPy. In fact, the benchmark doesn't give enough time to PyPy to warm up its JIT. I just released perf 0.7.4, which has better calibration code for Python implementations using a JIT (PyPy) and better default configuration values: Default (with a JIT, ex: PyPy): 6 processes, 10 samples per process (total: 60 samples), and 10 warmups.
vs Default (no JIT, ex: CPython): 20 processes, 3 samples per process (total: 60 samples), and 1 warmup. perf 0.7.4 also has new helper functions: python_implementation() and python_has_jit(). I started to patch some benchmarks to increase their number of warmups even more: * go: 50 warmups * hexiom2: 15 warmups * tornado http: 30 warmups If perf detects a JIT (PyPy), the warmup step now computes more samples dynamically if it detects that a raw sample is smaller than the minimum time (100 ms). All these changes were written to help PyPy to warm up its JIT. I'm not sure that it's fully correct; it may make benchmarks less reliable since the number of warmup samples is no longer constant. Maybe the code should be enhanced even more to at least use the same parameters in all worker processes. Victor From brett at python.org Thu Aug 18 14:55:31 2016 From: brett at python.org (Brett Cannon) Date: Thu, 18 Aug 2016 18:55:31 +0000 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On Wed, 17 Aug 2016 at 17:37 Victor Stinner wrote: > Hi, > > After a few months of work, I created a "new" benchmark suite for Python: > > https://github.com/python/benchmarks > [SNIP] > Yay! > > Known regressions: > > * Memory tracking is broken > * run_compare command is currently broken: use run (store result into > a file) + compare manually > * Some benchmarks have been removed: rietveld, spitfire (not on PyPI), > pystone, gcbench, tuple_gc_hell > * I only tested Linux, I expect issues on Windows. (I didn't try my > perf module on Windows yet.) > > I already allowed all Python core developers to push to the GitHub > project. We can create a new "benchmarks" (or "Performance" maybe?) > team if we want to allow more contributors who are not core > developers. > > PyPy, Pyston, Pyjion, Numba, etc. : Hey!
it's now time to start to > take a look at my project and test it ;-) Tell me what is broken, what > is missing, and I will try to help you to move your project to this > new benchmark suite! > > As requested (suggested?) by Brett Cannon, the Git repository has no > history, it only contains 1 commit! I'm really sorry of losing all > the history and all authors, but it allows to start with a much > smaller repository: around 2 MB. The current benchmark repository is > more around 200 MB! > If people care then they can look at hg.python.org/benchmarks > > TODO: > > * continue to upgrade libraries in requirements.txt. I failed to > upgrade Django to 1.10, it complains about a missing template engine > config setting. > * convert more code to the perf module, like "startup" tests > * run benchmarks and analyze results ;-) > * write more documentation explaining how to run reliable benchmarks > * ... > I'll try to get around to running the benchmarks on Windows to see if any issues come up. From ncoghlan at gmail.com Fri Aug 19 07:20:19 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Aug 2016 21:20:19 +1000 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On 19 August 2016 at 01:55, Victor Stinner wrote: > 2016-08-18 8:48 GMT+02:00 Armin Rigo : >> Indeed, bzr cannot be installed on PyPy because it uses Cython in a >> strange way: it declares and directly pokes inside PyListObjects from >> a .pyx file. But note that bzr (seems to) have systematically a pure >> Python version of all its .pyx files. (...) > > bazar is only used for a "startup" benchmark. I don't think that such > benchmark is very interesting... I would prefer to see a benchmark on > a less dummy operation on the repository than displaying the help...
Simple commands like displaying help messages are where interpreter startup time dominates the end user experience for applications written in Python, though. For example, improvements to import system performance tend to mostly show up there - for longer running benchmarks, changes in startup time tend to get swamped by the actual runtime speed, while the baseline "python -c 'pass'" mainly varies based on how many modules we're implicitly importing at startup rather than how well the import system is performing . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Fri Aug 19 12:47:49 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 19 Aug 2016 18:47:49 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On Fri, Aug 19, 2016 at 1:20 PM, Nick Coghlan wrote: > On 19 August 2016 at 01:55, Victor Stinner wrote: >> 2016-08-18 8:48 GMT+02:00 Armin Rigo : >>> Indeed, bzr cannot be installed on PyPy because it uses Cython in a >>> strange way: it declares and directly pokes inside PyListObjects from >>> a .pyx file. But note that bzr (seems to) have systematically a pure >>> Python version of all its .pyx files. (...) >> >> bazar is only used for a "startup" benchmark. I don't think that such >> benchmark is very interesting... I would prefer to see a benchmark on >> a less dummy operation on the repository than displaying the help... > > Simple commands like displaying help messages are where interpreter > startup time dominates the end user experience for applications > written in Python, though. For example, improvements to import system > performance tend to mostly show up there - for longer running > benchmarks, changes in startup time tend to get swamped by the actual > runtime speed, while the baseline "python -c 'pass'" mainly varies > based on how many modules we're implicitly importing at startup rather > than how well the import system is performing . 
> > Cheers, > Nick. I would still argue that displaying help is not a very good benchmark :-) From alex.gaynor at gmail.com Fri Aug 19 12:48:44 2016 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 19 Aug 2016 12:48:44 -0400 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: It's probably an ok benchmark of warmup. Alex On Fri, Aug 19, 2016 at 12:47 PM, Maciej Fijalkowski wrote: > On Fri, Aug 19, 2016 at 1:20 PM, Nick Coghlan wrote: > > On 19 August 2016 at 01:55, Victor Stinner > wrote: > >> 2016-08-18 8:48 GMT+02:00 Armin Rigo : > >>> Indeed, bzr cannot be installed on PyPy because it uses Cython in a > >>> strange way: it declares and directly pokes inside PyListObjects from > >>> a .pyx file. But note that bzr (seems to) have systematically a pure > >>> Python version of all its .pyx files. (...) > >> > >> bazar is only used for a "startup" benchmark. I don't think that such > >> benchmark is very interesting... I would prefer to see a benchmark on > >> a less dummy operation on the repository than displaying the help... > > > > Simple commands like displaying help messages are where interpreter > > startup time dominates the end user experience for applications > > written in Python, though. For example, improvements to import system > > performance tend to mostly show up there - for longer running > > benchmarks, changes in startup time tend to get swamped by the actual > > runtime speed, while the baseline "python -c 'pass'" mainly varies > > based on how many modules we're implicitly importing at startup rather > > than how well the import system is performing . > > > > Cheers, > > Nick. > > I would still argue that displaying help is not a very good benchmark :-) > _______________________________________________ > Speed mailing list > Speed at python.org > https://mail.python.org/mailman/listinfo/speed > -- "I disapprove of what you say, but I will defend to the death your right to say it." 
-- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: D1B3 ADC0 E023 8CA6 From fijall at gmail.com Fri Aug 19 12:50:48 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 19 Aug 2016 18:50:48 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: Very likely just pyc import time On Fri, Aug 19, 2016 at 6:48 PM, Alex Gaynor wrote: > It's probably an ok benchmark of warmup. > > Alex > > On Fri, Aug 19, 2016 at 12:47 PM, Maciej Fijalkowski > wrote: >> >> On Fri, Aug 19, 2016 at 1:20 PM, Nick Coghlan wrote: >> > On 19 August 2016 at 01:55, Victor Stinner >> > wrote: >> >> 2016-08-18 8:48 GMT+02:00 Armin Rigo : >> >>> Indeed, bzr cannot be installed on PyPy because it uses Cython in a >> >>> strange way: it declares and directly pokes inside PyListObjects from >> >>> a .pyx file. But note that bzr (seems to) have systematically a pure >> >>> Python version of all its .pyx files. (...) >> >> >> >> bazar is only used for a "startup" benchmark. I don't think that such >> >> benchmark is very interesting... I would prefer to see a benchmark on >> >> a less dummy operation on the repository than displaying the help... >> > >> > Simple commands like displaying help messages are where interpreter >> > startup time dominates the end user experience for applications >> > written in Python, though. For example, improvements to import system >> > performance tend to mostly show up there - for longer running >> > benchmarks, changes in startup time tend to get swamped by the actual >> > runtime speed, while the baseline "python -c 'pass'" mainly varies >> > based on how many modules we're implicitly importing at startup rather >> > than how well the import system is performing . >> > >> > Cheers, >> > Nick.
>> >> I would still argue that displaying help is not a very good benchmark :-) >> _______________________________________________ >> Speed mailing list >> Speed at python.org >> https://mail.python.org/mailman/listinfo/speed > > > > > -- > "I disapprove of what you say, but I will defend to the death your right to > say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > GPG Key fingerprint: D1B3 ADC0 E023 8CA6 > From ncoghlan at gmail.com Sun Aug 21 01:38:40 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 21 Aug 2016 15:38:40 +1000 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On 20 August 2016 at 02:50, Maciej Fijalkowski wrote: > Very likely just pyc import time As one of the import system maintainers, that's a number I consider quite interesting and worth benchmarking :) It's also one of the key numbers for Linux distro Python usage, since it impacts how responsive the system shell feels to developers and administrators - an end user can't readily tell the difference between "this shell is slow" and "this particular command I am running is using a language interpreter with a long startup time", but an interpreter benchmark suite can. Cheers, Nick. 
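The startup cost Nick describes can be measured crudely with the stdlib alone (a hypothetical helper for illustration, not one of the suite's benchmarks):

```python
import subprocess
import sys
import time

def startup_time(code="pass", runs=3):
    # Best-of-N wall-clock time to launch the interpreter, run a tiny
    # snippet, and exit.  Passing an "import ..." snippet measures the
    # extra cost of importing modules at startup.
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.check_call([sys.executable, "-c", code])
        best = min(best, time.perf_counter() - t0)
    return best

baseline = startup_time()                          # ~ python -c 'pass'
with_imports = startup_time("import json, decimal")
```

Comparing baseline against with_imports isolates the import cost from the bare interpreter launch.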
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Sun Aug 21 05:02:14 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 21 Aug 2016 11:02:14 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On Sun, Aug 21, 2016 at 7:38 AM, Nick Coghlan wrote: > On 20 August 2016 at 02:50, Maciej Fijalkowski wrote: >> Very likely just pyc import time > > As one of the import system maintainers, that's a number I consider > quite interesting and worth benchmarking :) > > It's also one of the key numbers for Linux distro Python usage, since > it impacts how responsive the system shell feels to developers and > administrators - an end user can't readily tell the difference between > "this shell is slow" and "this particular command I am running is > using a language interpreter with a long startup time", but an > interpreter benchmark suite can. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia Fair point, let's have such a benchmark. Let's not have it called "bzr" though because it gives the wrong impression. The same way unladen swallow added a benchmark and called it "django" while not representing django very well. That said, likely not very many people use bzr, but still, would be good if it's called bzr-pyc or simpler - have a benchmark that imports a whole bunch of pyc from a big project (e.g. pypy :-) Cheers, fijal From ncoghlan at gmail.com Sun Aug 21 12:42:05 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Aug 2016 02:42:05 +1000 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On 21 August 2016 at 19:02, Maciej Fijalkowski wrote: > Let's not have it called "bzr" though because it gives the wrong > impression. The same way unladen swallow added a benchmark and called > it "django" while not representing django very well. 
That said, likely not very many people use bzr, but still, it would be good if it's called bzr-pyc - or, simpler, to have a benchmark that imports a whole bunch of .pyc files from a big project (e.g. pypy :-) Cheers, fijal From ncoghlan at gmail.com Sun Aug 21 12:42:05 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Aug 2016 02:42:05 +1000 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On 21 August 2016 at 19:02, Maciej Fijalkowski wrote: > Let's not have it called "bzr" though because it gives the wrong > impression. The same way unladen swallow added a benchmark and called > it "django" while not representing django very well. That said, likely > not very many people use bzr, but still, would be good if it's called > bzr-pyc or simpler - have a benchmark that imports a whole bunch of > pyc from a big project (e.g. pypy :-) Yeah, I assume the use of bzr for this purpose is mainly an accident of history - adopting a non-trivial cross-platform Python command line application, rather than putting together a synthetic benchmark like Tools/importbench that may not be particularly representative of real workloads (which can use techniques like lazy module imports to defer startup costs until those features are actually needed). Perhaps call the benchmark "bzr-startup" to emphasize the aim is to measure how long it takes bzr to start in general, more so than how long it takes to print the help message? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at gmail.com Sun Aug 21 13:36:02 2016 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 21 Aug 2016 19:36:02 +0200 Subject: [Speed] New benchmark suite for Python In-Reply-To: References: Message-ID: On 21 August 2016 at 11:02 AM, "Maciej Fijalkowski" wrote: > Let's not have it called "bzr" though because it gives the wrong > impression. The benchmark was called "bzr_startup", but I agree that "django" must be renamed to "django_template". There is an HTTP benchmark, tornado_http, using the local loopback (server and client run in the same process if I recall correctly). Victor
URL:

From victor.stinner at gmail.com Mon Aug 22 06:42:27 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 22 Aug 2016 12:42:27 +0200
Subject: [Speed] New benchmark suite for Python
In-Reply-To:
References:
Message-ID:

Done: I renamed the "django" benchmark to "django_template":
https://github.com/python/benchmarks/commit/d674a99e3a9a10a29c44349b2916740680e936c8

Victor

2016-08-21 19:36 GMT+02:00 Victor Stinner :
> On 21 August 2016 at 11:02 AM, Maciej Fijalkowski wrote:
>> Let's not have it called "bzr" though because it gives the wrong
>> impression.
>
> The benchmark was called "bzr_startup", but I agree that "django" must
> be renamed to "django_template". There is an HTTP benchmark,
> tornado_http, using the local link (server and client run in the same
> process if I recall correctly).
>
> Victor

From victor.stinner at gmail.com Wed Aug 24 11:38:13 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 24 Aug 2016 17:38:13 +0200
Subject: [Speed] performance 0.1 (and 0.1.1) release
Message-ID:

Hi,

I released a first version of the Python benchmark suite. (Quickly
followed by a 0.1.1 bugfix ;-)) It is now possible to install it using
pip:

python3 -m pip install performance

And run it using:

pyperformance run --python=python2 --rigorous -b all -o py2.json
pyperformance run --python=python3 --rigorous -b all -o py3.json
pyperformance compare py2.json py3.json

Note: the "python3 -m performance ..." syntax works too.

It creates virtual environments in the ./venv/ subdirectory. (I may add
an option to choose where to create them.)

performance 0.1.1 works well on Linux with CPython. There are some known
issues on Windows: https://github.com/python/benchmarks/issues/5

I don't consider the PyPy support stable yet.

I used the "performance" name on PyPI, because "benchmark" and
"benchmarks" are already reserved. The Python module is also named
"performance" and comes with a "pyperformance" script.
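As background on why the module and the script are interchangeable: the
"python3 -m performance ..." form relies on Python's -m switch running a
package's __main__ module. A minimal, self-contained sketch of that
mechanism (the "demo_perf" package below is made up for illustration; it
is not part of performance):

```python
import os
import subprocess
import sys
import tempfile

# "python3 -m somepackage" executes somepackage/__main__.py.  Build a
# throwaway package with a __main__.py, then run it with -m, which is
# the same mechanism the "python3 -m performance" syntax relies on.
with tempfile.TemporaryDirectory() as tmp:
    pkg = os.path.join(tmp, "demo_perf")
    os.mkdir(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "__main__.py"), "w") as f:
        f.write('print("running demo_perf")\n')
    # With -m, the current working directory is prepended to sys.path,
    # so running from tmp makes the package importable.
    proc = subprocess.run(
        [sys.executable, "-m", "demo_perf"],
        cwd=tmp, capture_output=True, text=True,
    )

print(proc.stdout.strip())  # -> running demo_perf
```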
I made a subtle bugfix: requirements.txt now uses fixed versions rather
than ">=min_version". For example, "perf>=0.7.4" became "perf==0.7.4". I
expect to get more reproducible benchmark results with fixed versions.
Before a release, we should not forget to update dependencies to test
the most recent versions of Python modules and applications.

Now the development version always installs performance 0.1.1 (see
performance/requirements.txt). I should fix this to install the
development version of performance/ when it is run from the source code
(when setup.py is available in the parent directory?).

Victor

From victor.stinner at gmail.com Wed Aug 24 12:01:27 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 24 Aug 2016 18:01:27 +0200
Subject: [Speed] performance 0.1 (and 0.1.1) release
In-Reply-To:
References:
Message-ID:

2016-08-24 17:38 GMT+02:00 Victor Stinner :
> Now the development version always installs performance 0.1.1 (see
> performance/requirements.txt). I should fix this to install the
> development version of performance/ when it is run from the source
> code (when setup.py is available in the parent directory?).

FYI it's now fixed: the development version of the benchmark suite now
installs the performance module using "pip install -e " in
development mode, but "pip install performance==x.y.z" in release mode.

Victor

From victor.stinner at gmail.com Thu Aug 25 18:07:53 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 26 Aug 2016 00:07:53 +0200
Subject: [Speed] Rename python/benchmarks GitHub project to python/performance?
Message-ID:

Hi,

For the first release of the "new" benchmark suite, I chose the name
"performance", since the "benchmark" and "benchmarks" names were already
reserved on PyPI. It's the name of the Python module, but also of the
command line tool: "pyperformance".

Since there is an "old" benchmark suite
(https://hg.python.org/benchmarks), PyPy has its benchmark suite, etc.
I propose to rename the GitHub project to "performance" to avoid
confusion.

What do you think?

Note: I'm not a big fan of the "performance" name, but I don't think it
matters much. The name only needs to be unique and available on PyPI :-D

By the way, I don't know if it's worth it to have a "pyperformance"
command line tool. You can already use the "python3 -m performance ..."
syntax. But you have to recall the Python version used to install the
module. "python2 -m performance ..." doesn't work if you only installed
performance for Python 3!

Victor

From solipsis at pitrou.net Fri Aug 26 04:26:07 2016
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 26 Aug 2016 10:26:07 +0200
Subject: [Speed] Rename python/benchmarks GitHub project to python/performance?
References:
Message-ID: <20160826102607.297442b3@fsol>

On Fri, 26 Aug 2016 00:07:53 +0200
Victor Stinner wrote:
>
> By the way, I don't know if it's worth it to have a "pyperformance"
> command line tool. You can already use the "python3 -m performance ..."
> syntax. But you have to recall the Python version used to install the
> module. "python2 -m performance ..." doesn't work if you only
> installed performance for Python 3!

Also, you may have several Python 3s installed (the system 3.4, a
custom 3.4, a custom 3.5, a custom 3.6...) so a CLI script is much
easier to use.

Regards

Antoine.

From victor.stinner at gmail.com Fri Aug 26 05:32:48 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 26 Aug 2016 11:32:48 +0200
Subject: [Speed] Rename python/benchmarks GitHub project to python/performance?
In-Reply-To: <20160826102607.297442b3@fsol>
References: <20160826102607.297442b3@fsol>
Message-ID:

On Friday, 26 August 2016, Antoine Pitrou wrote:
> On Fri, 26 Aug 2016 00:07:53 +0200
> Victor Stinner wrote:
> >
> > By the way, I don't know if it's worth it to have a "pyperformance"
> > command line tool. You can already use "python3 -m performance ..."
> > syntax.
> > But you have to recall the Python version used to install the
> > module. "python2 -m performance ..." doesn't work if you only
> > installed performance for Python 3!
>
> Also, you may have several Python 3s installed (the system 3.4, a
> custom 3.4, a custom 3.5, a custom 3.6...) so a CLI script is much
> easier to use.

Yeah right. Thanks for helping me make a decision on that. For example,
I don't want to install performance in the PyPy system directory.

FYI performance _is_ installed for each tested Python, but in a
dedicated virtual environment which is isolated from the system, to get
a more reliable testing environment. For example, the number of .pth
files installed on the system has an impact on startup time. Having a
controlled venv avoids the random number of .pth files.

Victor
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brett at python.org Fri Aug 26 13:06:19 2016
From: brett at python.org (Brett Cannon)
Date: Fri, 26 Aug 2016 17:06:19 +0000
Subject: [Speed] Rename python/benchmarks GitHub project to python/performance?
In-Reply-To:
References:
Message-ID:

On Thu, 25 Aug 2016 at 15:08 Victor Stinner wrote:
> Hi,
>
> For the first release of the "new" benchmark suite, I chose the name
> "performance", since "benchmark" and "benchmarks" names were already
> reserved on PyPI. It's the name of the Python module, but also of the
> command line tool: "pyperformance".
>
> Since there is an "old" benchmark suite
> (https://hg.python.org/benchmarks), PyPy has its benchmark suite, etc.
> I propose to rename the GitHub project to "performance" to avoid
> confusion.
>
> What do you think?

If you want to, then go ahead, but I don't think it will be a big issue
in the grand scheme of things.

> Note: I'm not a big fan of the "performance" name, but I don't think
> it matters much.
> The name only needs to be unique and available on PyPI :-D
>
> By the way, I don't know if it's worth it to have a "pyperformance"
> command line tool. You can already use "python3 -m performance ..."
> syntax. But you have to recall the Python version used to install the
> module. "python2 -m performance ..." doesn't work if you only
> installed performance for Python 3!

As Antoine pointed out, if it doesn't matter which interpreter has the
script installed when running the benchmarks against another
interpreter, then a script makes sense (but do keep it available
through -m).
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From victor.stinner at gmail.com Fri Aug 26 20:08:56 2016
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 27 Aug 2016 02:08:56 +0200
Subject: [Speed] performance 0.1 (and 0.1.1) release
In-Reply-To:
References:
Message-ID:

Release early, release often: performance 0.1.2 has been released! It is
the first version supporting Windows. I renamed the GitHub project from
python/benchmarks to python/performance.

All changes:

* Windows is now supported
* Add a new ``venv`` command to show, create, recreate or remove the
  virtual environment.
* Fix the pybench benchmark (update to the perf 0.7.4 API)
* performance now tries to install the ``psutil`` module on CPython for
  better system metrics in metadata and CPU pinning on Python 2.
* The creation of the virtual environment now also tries the
  ``virtualenv`` and ``venv`` Python modules, not only the virtualenv
  command.
* The development version of performance now installs performance with
  "pip install -e "
* The GitHub project was renamed from ``python/benchmarks`` to
  ``python/performance``.

Victor
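As a footnote on the ``venv`` fallback listed in the changes above: the
standard library can create such an isolated environment directly. A
rough sketch of that idea (not performance's actual code; the directory
names here are arbitrary):

```python
import os
import tempfile
import venv

# Create an isolated environment with the stdlib venv module - the kind
# of fallback used when the virtualenv command is not available.
# with_pip=False keeps the example fast; a real setup would install pip.
env_dir = os.path.join(tempfile.mkdtemp(), "venv")
venv.create(env_dir, with_pip=False)

# A venv is marked by a pyvenv.cfg file at the environment root.
print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))  # -> True
```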