Is it safe to assume floats always have a 53-bit mantissa?

Steven D'Aprano steve at pearwood.info
Wed Dec 30 08:18:32 EST 2015


We know that Python floats are equivalent to C doubles, which are 64-bit
IEEE-754 floating point numbers.

Well, actually, C doubles are not strictly defined. The only promise the C
standard makes is that double is no smaller than float. (That's C float,
not Python float.) And of course, not all Python implementations use C.

Nevertheless, it's well known (in the sense that "everybody knows") that
Python floats are equivalent to C 64-bit IEEE-754 doubles. How safe is that
assumption?
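
For what it's worth, the assumption can at least be checked at runtime,
independently of whatever the C compiler provided. A quick sanity-check
sketch using sys.float_info:

import sys

# These are the values an IEEE-754 binary64 ("double") build reports:
# binary radix, 53 significand bits (counting the implicit leading bit),
# and the binary64 exponent range.
assert sys.float_info.radix == 2
assert sys.float_info.mant_dig == 53
assert sys.float_info.max_exp == 1024
assert sys.float_info.min_exp == -1021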

I have a function with two implementations: a fast implementation that
converts an int to a float, does some processing, then converts it back to
int. That works fine so long as the int can be represented exactly as a
float.
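
To be concrete about where the fast path stops being trustworthy, 2**53 is
exactly where the int -> float -> int round trip stops being lossless:

# 2**53 itself still survives the round trip through float...
assert int(float(2**53)) == 2**53
# ...but 2**53 + 1 is the first integer that doesn't: it rounds to its
# even neighbour 2**53, so the float version would silently be wrong.
assert float(2**53 + 1) == float(2**53)
assert int(float(2**53 + 1)) == 2**53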

The other implementation uses integer maths only, and is much slower but
exact.

As an optimization, I want to write:


def func(n):
    if n <= 2**53:
        ...  # use the floating point fast implementation
    else:
        ...  # fall back on the slower, but exact, int algorithm


(The optimization makes a real difference: for large n, the float version is
about 500 times faster.)

But I wonder whether I need to write this instead?

import sys

def func(n):
    if n <= 2**sys.float_info.mant_dig:
        ...  # ...float
    else:
        ...  # ...int


I don't suppose it really makes any difference performance-wise, but I can't
help but wonder if it is really necessary. If sys.float_info.mant_dig is
guaranteed to always be 53, why not just write 53?
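
One middle ground would be to derive the cut-off from sys.float_info once,
at import time, so the assumption is at least written down in the code.
Just a sketch, where _float_version and _int_version are hypothetical
stand-ins for the two implementations described above:

import sys

# On an IEEE-754 binary64 build this is exactly 2**53; on a more exotic
# build the cut-off adjusts itself automatically.
_EXACT_INT_LIMIT = 2 ** sys.float_info.mant_dig

def func(n):
    if n <= _EXACT_INT_LIMIT:
        # fast path: int -> float -> int is exact in this range
        return _float_version(n)
    else:
        # slow path: pure integer arithmetic, exact for any n
        return _int_version(n)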



-- 
Steven



