From matti.picus at gmail.com  Mon Apr  2 23:37:29 2018
From: matti.picus at gmail.com (Matti Picus)
Date: Tue, 3 Apr 2018 06:37:29 +0300
Subject: [Speed] Progress - I have run and uploaded PyPy benchmarks
Message-ID: <35136c8a-d376-4278-366d-3354e566f306@gmail.com>

I have run and uploaded PyPy benchmarks. I am not too sure I have done
it correctly; here is what I did:
https://gist.github.com/mattip/e99d5a3873df4460daffba0b74c8e23e

For some reason, I needed to go to the admin site
https://speed.python.org/admin/codespeed/report/ and add a report for
the run. The results show up on https://speed.python.org/comparison/,
better viewed in horizontal mode (should this be the default?) and with
the number of executables reduced. They do not show up on the home page
or on the Changes and Timeline tabs.

Any suggestions?

Matti

From matthew at woodcraft.me.uk  Sun Apr 22 15:00:59 2018
From: matthew at woodcraft.me.uk (Matthew Woodcraft)
Date: Sun, 22 Apr 2018 20:00:59 +0100
Subject: [Speed] Measure of Python performance for general-purpose code
Message-ID: <1524423602.6yvtu2b1ql.astroid@golux.woodcraft.me.uk>

I'm doing some Python speed testing, measuring the effect of different
combinations of compiler flags on a small range of hardware.

So far I've mostly been using, as the test load, a specific program I
happen to care about. But I'm thinking of writing up the results for
more general interest, so I've been looking at pyperformance.

To get comprehensible results, I think I really need to summarise the
speed of a particular build+hardware combination as a single number,
representing Python's performance for "general purpose code".

So does anyone have any recommendations on what the best figure to
extract from pyperformance results would be?

Is pyperformance's 'default' benchmark group the most suitable for
this?

Is there any more sensible way to get a single number than taking the
geometric mean of what Benchmark.mean() gives me for each test in the
group?

Are pyperformance's other default settings suitable for this purpose?

-M-

From ncoghlan at gmail.com  Tue Apr 24 10:32:19 2018
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 25 Apr 2018 00:32:19 +1000
Subject: [Speed] Measure of Python performance for general-purpose code
In-Reply-To: <1524423602.6yvtu2b1ql.astroid@golux.woodcraft.me.uk>
References: <1524423602.6yvtu2b1ql.astroid@golux.woodcraft.me.uk>
Message-ID:

On 23 April 2018 at 05:00, Matthew Woodcraft wrote:
> To get comprehensible results, I think I really need to summarise the
> speed of a particular build+hardware combination as a single number,
> representing Python's performance for "general purpose code".
>
> So does anyone have any recommendations on what the best figure to
> extract from pyperformance results would be?

There's no such number in the general case, since the way different
aspects should be weighted differs significantly based on your use
case (e.g. a long-running server or GUI application may care very
little about startup time, while startup time is critical for
command-line application responsiveness). That's why we have a
benchmark suite, rather than just a single benchmark.

https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
is an example of going through and calling out specific benchmarks
based on the kind of code they best represent.

So I don't think you're going to be able to avoid coming up with your
own custom scheme that emphasises a particular usage profile.
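To make that concrete, here's a rough, untested sketch of one such
scheme: a weighted geometric mean over per-benchmark means. The
benchmark names and weights below are made up purely for illustration,
and it assumes you've already reduced each benchmark to a single number
(e.g. via the Benchmark.mean() call you mentioned):

    import math

    # Per-benchmark mean run times in seconds. These values are
    # illustrative only; in practice they'd come from your
    # pyperformance results.
    means = {
        "python_startup": 0.015,
        "json_loads": 0.025,
        "django_template": 0.120,
    }

    # Hypothetical weights expressing one usage profile; a
    # long-running server profile might set "python_startup" to 0.
    weights = {
        "python_startup": 1.0,
        "json_loads": 2.0,
        "django_template": 3.0,
    }

    def weighted_geometric_mean(values, weights):
        # exp(sum(w * ln(x)) / sum(w)); reduces to the plain
        # geometric mean when all weights are equal.
        total = sum(weights[name] for name in values)
        log_sum = sum(weights[name] * math.log(v)
                      for name, v in values.items())
        return math.exp(log_sum / total)

    print(weighted_geometric_mean(means, weights))

The appeal of the geometric mean for this kind of aggregation is that
the ratio between two builds' scores doesn't depend on which build you
normalise against, which is why it's the conventional choice for
combining benchmark results.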
While the simplest approach is the one the linked article took (i.e.
weighting one benchmark at a time at 100% and ignoring the others),
searching for "combining multiple benchmark results into an aggregate
score" returned
https://pubsonline.informs.org/doi/pdf/10.1287/ited.2013.0124 as the
first link for me, and based on skimming the abstract and
introduction, I think it's likely to be quite relevant to your
question.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia