finding out the precision of floats

Arnaud Delobelle arnodel at googlemail.com
Sun Feb 25 07:06:38 EST 2007


On Feb 25, 11:20 am, "John Machin" <sjmac... at lexicon.net> wrote:
[...]
> I'm a little puzzled:
>
> You don't seem to want a function that will tell you the actual number
> of significant decimal digits in a particular number e.g.
>
> nsig(12300.0) -> 3
> nsig(0.00123400) -> 4
> etc
>
> You appear to be trying to determine what is the maximum number of
> significant decimal digits afforded by the platform's implementation
> of Python's float type.

Yes you are correct.
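
(Just to check I follow what such an nsig would do, here is a rough
sketch of one way it could be written -- counting the digits of the
shortest decimal form that converts back to the exact same float.
Only an illustration, not what I'm after:)

def nsig(x):
    # Rough sketch: count the significant decimal digits of the
    # shortest decimal representation that converts back to the
    # exact same float.
    if x == 0.0:
        return 1
    for digits in range(1, 18):
        # '%.*e' uses digits-1 places after the point, i.e. 'digits'
        # significant digits in total.
        if float('%.*e' % (digits - 1, x)) == x:
            return digits
    return 17   # an IEEE 754 double never needs more than 17

print(nsig(12300.0))     # -> 3
print(nsig(0.00123400))  # -> 4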

> Is Python implemented on a platform that
> *doesn't* use IEEE 754 64-bit FP as the in-memory format for floats?

I had no knowledge of IEEE 754 64-bit FP.  The Python docs say that
floats are implemented using the C 'double' data type, but I didn't
realise there was a standard for this across platforms.

Thanks for clarifying this. As my question shows, I am not well
versed in floating-point arithmetic!
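
Out of curiosity, I had a look at what that C double looks like in
memory.  If I've understood the format correctly, the struct module
can expose the IEEE 754 binary64 layout directly (1 sign bit, 11
exponent bits and 52 explicitly stored fraction bits, the 53rd
significant bit being implicit), e.g.:

import struct

def double_bits(x):
    # Reinterpret the 8 bytes of the double as a 64-bit integer and
    # split it into the three IEEE 754 binary64 fields.
    (n,) = struct.unpack('>Q', struct.pack('>d', x))
    b = format(n, '064b')
    return b[0], b[1:12], b[12:]   # sign, exponent, fraction

print(double_bits(1.0))
# For 1.0: sign '0', exponent '01111111111' (i.e. 1023, the bias),
# and 52 zero fraction bits.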

Looking at the definition of IEEE 754, the mantissa is made of 53
significant binary digits (52 stored explicitly plus one implicit
leading bit), which means
53*log10(2) = 15.954589770191003 significant decimal digits
(I got 16 with my previous dodgy calculation).

Does it mean it is safe to assume that this would hold on any
platform?
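
(Alternatively, rather than assume, I suppose I could probe the float
type at run time.  A quick sketch along these lines should print 53
wherever Python's float really is an IEEE 754 double:)

import math

print(53 * math.log10(2))   # ~15.95 decimal digits for a 53-bit mantissa

# Count the mantissa width empirically: find the largest k for which
# 1.0 + 2.0**-k is still distinguishable from 1.0.  For an IEEE 754
# double that is k = 52, i.e. 52 stored fraction bits plus the
# implicit leading bit = 53 significant bits.
k = 0
while 1.0 + 2.0 ** -(k + 1) != 1.0:
    k += 1
print(k + 1)   # expected: 53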

--
Arnaud



