How to make Python run as fast (or faster) than Julia

Steven D'Aprano steve+comp.lang.python at pearwood.info
Fri Feb 23 11:39:02 EST 2018


On Fri, 23 Feb 2018 23:41:44 +1100, Chris Angelico wrote:

[...]
>>  Integer pixel values
> 
> Maybe in 64 bits for the time being, but 32 certainly won't be enough.
> As soon as you do any sort of high DPI image manipulation, you will
> exceed 2**32 total pixels in an image (that's just 65536x65536, or
> 46341x46341 if you're using signed integers); 

I don't think there's any sensible reason to use signed ints for pixel 
offsets.


> and it wouldn't surprise
> me if some image manipulation needs that many on a single side - if not
> today, then tomorrow. So 64 bits might not be enough once you start
> counting total pixels.

A 64-bit int will limit your image size to a maximum of 4294967296 x 
4294967296.
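
(Quick sanity check of the arithmetic:)

py> 4294967296 ** 2 == 2 ** 64
True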

If you stitched the images from, let's say, the Hubble Space Telescope 
together into one panoramic image of the entire sky, you'd exceed a 
32-bit pixel count by four or five orders of magnitude, though not a 
64-bit one: the whole sky is about 41,253 square degrees, and at 
Hubble's resolution of roughly 0.05 arcseconds per pixel that works 
out to around 2x10**14 pixels (back-of-envelope below). There's a lot 
of sky and the resolution of the Hubble is very fine.

Or made a collage of all the duck-face photos on Facebook *wink*

But you wouldn't be able to open such a file on your computer. Not until 
we get a 128-bit OS :-)
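
Here's that back-of-envelope on the sky panorama, in integer 
arithmetic (taking 0.05 arcsecond pixels, which is 400 pixels per 
square arcsecond):

py> 41253 * 3600 * 3600 * 400  # sq deg * arcsec^2/deg^2 * px/arcsec^2
213855552000000

Call it 2x10**14: miles past 2**32, but still well short of 2**64.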

While all of this is very interesting, let's remember that listing all 
the many things which can be counted in 64 bits (or 128 bits, or 1024 
bits...) doesn't disprove the need for bignums. The argument Bart makes 
is to list all the things that don't need refrigerating:

- tinned beans
- nuts and bolts
- DVDs
- books
- spoons

as evidence that we don't need refrigerators. Yeah, I get it, there are 
approximately a hundred thousand million billion trillion things that 
don't need bignums. There are so many things that don't need bignums, you 
need a bignum to count them all!

I'm truly happy for Bart that he never, or almost never, uses numbers 
larger than 2**64. I just wish he would be less of a prig about this and 
stop trying to deny me my bignums.

One of the things I like about Python is that I stopped needing to worry 
about integer overflow back around Python 2.2, when ints that overflowed 
started promoting themselves to longs (bignums) automatically. I used to 
write code like:

    try:
        calculation(n)        # fast path: machine-sized int
    except OverflowError:
        calculation(long(n))  # retry with an unbounded long (Python 2)

all the time back then. Now, if I have an integer calculation, I just 
calculate it without having to care whether it fits in 64 bits or not.
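
For instance, right at the old 64-bit boundary:

py> n = 2**63 - 1  # the largest signed 64-bit value
py> n + 1          # no OverflowError; Python just keeps counting
9223372036854775808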

And I don't give a rat's arse that this means that adding 1 to 10 to get 
11 in Python takes thirty nanoseconds instead of ten. If I cared about 
that, I wouldn't be using Python.

Just the other day I needed to calculate 23! (factorial) for a 
probability calculation, and Python made it trivially easy. I didn't 
need to import a special library, or use special syntax to say "use a 
bignum". I just multiplied 1 through 23.
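
For anyone playing along at home, that's a one-liner (reduce is a 
builtin on Python 2; on Python 3 it lives in functools):

py> import operator
py> from functools import reduce
py> reduce(operator.mul, range(1, 24))  # 1 * 2 * ... * 23
25852016738884976640000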

Why, just now, on a whim, simply because I can, I calculated 100!! 
(double-factorial):

py> reduce(operator.mul, range(100, 0, -2), 1)
34243224702511976248246432895208185975118675053719198827915654463488000000000000

(Yes, I'm a maths geek, and I love this sort of thing. So sue me.)
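
If you're sceptical of that 80-digit value, there's an easy 
cross-check: each of the fifty even factors contributes one power of 
two, so 100!! must equal 2**50 * 50!:

py> import math, operator
py> from functools import reduce
py> reduce(operator.mul, range(100, 0, -2), 1) == 2**50 * math.factorial(50)
True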

There are dozens of languages that have made the design choice to limit 
their default integers to a fixed 16-, 32- or 64-bit size, and let the 
user worry about overflow. Bart, why does it upset you so that Python 
made a different choice?


-- 
Steve



