Python2 distutils question: how to best "autoconfig"...?
Tim Peters
tim.one at home.com
Mon Dec 11 14:42:48 EST 2000
[posted & mailed]
[Alex Martelli]
> An issue came up in the GMPY project that looks like it
> would need to be solved through some kind of 'automatic
> configuration' -- compile a small auxiliary C program
> (under exactly the same flags/options/whatever as those
> Python has been compiled on, on this installation), run
> it, check the results, and determine compilation flags
> accordingly.
>
> The specific issue is "how many bits of significance
> does a Python float have". (Or is there somewhere in
> the Python headers a #define for this that I missed...?)
You can use DBL_MANT_DIG from the platform's float.h. Try hard to ignore
FLT_RADIX <wink>.
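(As a sketch of where DBL_MANT_DIG's usual value of 53 comes from, not part of Tim's post: an IEEE-754 double stores 52 explicit fraction bits plus one implicit leading bit, and `struct` can make the 52 stored bits visible.)

```python
import struct

# An IEEE-754 double has 1 sign bit, 11 exponent bits, and 52 stored
# fraction bits; the implicit leading 1 brings the significand to 53
# bits, which is DBL_MANT_DIG on such platforms.
bits_one, = struct.unpack('>Q', struct.pack('>d', 1.0))
bits_next, = struct.unpack('>Q', struct.pack('>d', 1.0 + 2.0 ** -52))
# 1.0 + 2**-52 is the smallest double > 1.0: the two bit patterns
# differ only in the last of the 52 stored fraction bits.
assert bits_next - bits_one == 1
```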
http://www.cwi.nl/ftp/steven/enquire/enquire.html
is a very readable account of how to compute these things in C, but I've
seen optimizers that aren't fooled by his favorite trick (hiding everything
in function calls).
> I first tried a rather naive algorithm (halving a
> number that starts at 1.0, and keep adding it to 1.0
> until the result equals 1.0), which, on Visual C++,
> gives the expected result -- 53 bits of precision.
>
> People using gcc (on Intel hardware) meanwhile seemed
> to be measuring by this same algorithm '64 bits' of
> precision -- which aren't really there for _most_
> float computations... apparently gcc is sometimes
> using the 80-bit "temporary" (extended precision)
> format, which does have 64 bits of precision... but
> most places it's down to 53, of course.
VC may do the same -- it's unpredictable. If you write the algorithm in
*Python*, though, no optimizer known to humankind is smart enough to keep
your temps in Pentium extended registers across being stored into PyFloat
objects. For example, I use this:
# Figure out the number of bits of precision in a float.
# Assumes some binary format is used, that "0.5" is converted
# exactly, and that multiplication by 0.5 is exact.
# Should not be affected by rounding mode.
def _compute_floating_precision():
    FLOATING_PRECISION = 1
    d = 0.5
    assert d == 1.0 / 2.0
    while (1.0 + d) - 1.0 == d:
        FLOATING_PRECISION = FLOATING_PRECISION + 1
        d = d * 0.5
        # stop infinite loop; expect no more than 56 or so; IEEE-754
        # double stops at 53; a 128-bit double may go up to 113 or so
        if FLOATING_PRECISION > 1000:
            raise SystemError("machine arithmetic is mondo bizarre")
    return FLOATING_PRECISION
In the equivalent C, I once caught a MetroWerks compiler "optimizing"
(1.0+d)-1.0 to d, and then d==d to 1. A better (but still not guaranteed)
approach in C is to run doubles thru frexp, reconstructing their values from
frexp's outputs via ldexp. That's pretty effective at frustrating
optimizers into doing what you wanted <0.6 wink>.
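(A sketch of that frexp/ldexp round trip in Python, with a name of my own choosing -- in Python it's an identity, since CPython floats are already stored as 64-bit doubles, but it shows the reconstruction the C trick relies on.)

```python
import math

def round_trip(x):
    # Decompose x into mantissa and exponent, then reconstruct it.
    # In C, routing a value through frexp/ldexp tends to force it out
    # of an 80-bit extended register into a 64-bit double; here it
    # simply reproduces x exactly.
    m, e = math.frexp(x)       # x == m * 2**e, with 0.5 <= abs(m) < 1
    return math.ldexp(m, e)

assert round_trip(math.pi) == math.pi
assert round_trip(1.0 + 2.0 ** -52) == 1.0 + 2.0 ** -52
```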
> This matters (most particularly) when I'm trying to
> get the 'heuristically best' rational number to
> match a given floating-point number. If I know the
> real precision of the float is 53 or thereabouts, I
> can get very nice results through a Stern-Brocot
> tree (as suggested by Pearu Peterson) -- the resulting
> rational number equals the 'exact fraction' that was
> used for all 'small' numbers i and j, when I turn
> float(i)/j into a rational (gmpy.mpq). But if I
> run under the misleading impression that the float
> has 64 bits of precision, well, I end up with a huge,
> 'unreadable' rational number.
Alternative: by hook or by crook, cut your double x back to storage
precision (see above). Say two successive convergents are i/j and i'/j'.
Stop, and return i/j, when abs(double(i)/j - x) <= abs(double(i')/j' - x),
i.e. stop when the convergents stop improving in double arithmetic.
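(A sketch of that stopping rule -- the names are mine, not gmpy's: walk the continued-fraction convergents of x and return the last one that still improved the error in double arithmetic, or the first one that hits x exactly.)

```python
from math import floor

def float_to_ratio(x):
    # Return (i, j) with i/j a "heuristically best" rational for x:
    # generate continued-fraction convergents p/q and stop as soon as
    # the next convergent no longer gets closer to x as a double.
    sign = -1 if x < 0 else 1
    x = abs(x)
    # standard convergent recurrence seeds: p[-1]/q[-1], p[-2]/q[-2]
    p0, q0, p1, q1 = 0, 1, 1, 0
    r = x
    while True:
        a = int(floor(r))
        p2, q2 = a * p1 + p0, a * q1 + q0
        # stop when convergents stop improving in double arithmetic
        if q1 and abs(float(p2) / q2 - x) >= abs(float(p1) / q1 - x):
            return sign * p1, q1
        if float(p2) / q2 == x:
            return sign * p2, q2
        p0, q0, p1, q1 = p1, q1, p2, q2
        frac = r - floor(r)
        if frac == 0:
            return sign * p2, q2
        r = 1.0 / frac

assert float_to_ratio(0.5) == (1, 2)
assert float_to_ratio(float(1) / 3) == (1, 3)
assert float_to_ratio(3.5) == (7, 2)
```

Because errors are compared as nonnegative doubles, a strictly improving run can't go on forever, so the loop always terminates even when rounding noise keeps `frac` from hitting zero exactly.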
more-than-one-way-to-avoid-a-config-ly y'rs - tim