[Speed] Should we change what benchmarks we have?
Yury Selivanov
yselivanov.ml at gmail.com
Thu Feb 11 18:16:23 EST 2016
On 2016-02-11 6:06 PM, Antoine Pitrou wrote:
> On Thu, 11 Feb 2016 17:50:05 -0500
> Yury Selivanov <yselivanov.ml at gmail.com>
> wrote:
>> Right now, spectral_norm is 50% faster on python 2 (when compared to 3.5).
> spectral_norm is really a horrid benchmark.
>
>> Yes, spectral_norm is micro-benchmark, but still, there is a lot of
>> python code out there that does some calculation in pure Python not
>> involving numpy or pypy.
> Can you clarify "a lot"?
Any code that does "int [op] int" operations. That code becomes
faster (especially with small ints), and significantly faster in tight
loops (which is exactly what spectral_norm is doing).
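For context, the hot path of spectral_norm is roughly the kind of pure-Python
integer arithmetic sketched below. This is an illustrative approximation of
the benchmark's A(i, j) helper, not a copy of the benchmark source; the names
and the driver function are mine.

```python
def eval_A(i, j):
    # Several "int [op] int" operations per call -- add, multiply,
    # floor-divide -- all on small ints. A fast path for small-int
    # binary ops in the eval loop makes each of these cheaper.
    return 1.0 / ((i + j) * (i + j + 1) // 2 + i + 1)

def row_sums(n):
    # Tight nested loops dominated by integer arithmetic, which is
    # where the per-operation savings compound.
    return [sum(eval_A(i, j) for j in range(n)) for i in range(n)]
```

Timing `row_sums` with a large `n` under `timeit` gives a rough feel for how
much of the benchmark's cost is plain int arithmetic.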
Look at the pillow package, for instance [1] -- just one of the first
packages I thought of -- something non-scientific that happens to do
some calculations here and there.
Unless issue 21955 makes numpy code slower, I'm not sure why we're
discussing this.
Yury
[1] https://github.com/python-pillow/Pillow