What is precision of a number representation?

Antoon Pardon antoon.pardon at rece.vub.ac.be
Tue Jul 12 07:02:38 EDT 2016


On 12-07-16 at 12:27, Marko Rauhamaa wrote:
> Antoon Pardon <antoon.pardon at rece.vub.ac.be>:
>
>> On 12-07-16 at 06:19, Steven D'Aprano wrote:
>>> How do you represent 1 mm to a precision of four significant digits,
>>> in such a way that it is distinguished from 1 mm to one significant
>>> digit, and 1 mm to a precision of four decimal places?
>> A precision is an indication of a fault tolerance. You don't indicate
>> less fault tolerance by writing it as 0001.
>
>> Please explain how 0001 represents a different precision than just
>> 1.
> "Precision" here is not ordinary usage. Historically, it comes from the
> printf(3) library function:

Steven was talking about significant digits, so my use of "precision"
was in that context. I think your printf reference is irrelevant here.
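
For what it's worth, here is a small interactive sketch of what that
printf-style "precision" actually does to an integer in Python: it only
pads the printed text with zeros and records nothing about tolerance.

    >>> "%.4d" % 1               # printf-style precision on an integer
    '0001'
    >>> format(1, "04d")         # zero-padded width gives the same text
    '0001'
    >>> int("0001") == int("1")  # the underlying value is identical
    True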

Talking about how a number is printed is different from talking about
the significance of its digits. If, for whatever reason, you want to
write 1 as 0001, be my guest.

If you claim that this somehow expresses a difference in significant
digits, I would like someone to explain how. And yes, I want that
explained in terms of a difference in fault or error tolerance, because
that is, in my opinion, what significant digits refer to.
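
The closest thing I know of in the standard library that records the
digits as written is decimal.Decimal: it preserves trailing zeros, so
"1.000" stays distinct from "1" even though the two compare equal. A
small sketch; note that even this only keeps the digits, it still does
not express a fault tolerance.

    >>> from decimal import Decimal
    >>> Decimal("1") == Decimal("1.000")   # equal as values
    True
    >>> Decimal("1"), Decimal("1.000")     # but the written digits are kept
    (Decimal('1'), Decimal('1.000'))
    >>> Decimal("1.000").as_tuple()
    DecimalTuple(sign=0, digits=(1, 0, 0, 0), exponent=-3)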

>
>> Now between which two numbers is 0001 supposed to be?
> What is your problem? What practical trouble is Python's format method
> giving you?

This is not about Python's format method. It is about someone asserting
a difference in significant digits.
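
To make Steven's original example concrete (this is only my reading of
it, rendered with str.format): asking for significant digits and asking
for decimal places are different requests, and the 'g' presentation even
drops the trailing zeros that were supposed to carry the significance.

    >>> "{:.4g}".format(1.0)   # 4 significant digits; 'g' drops trailing zeros
    '1'
    >>> "{:.4f}".format(1.0)   # 4 decimal places
    '1.0000'
    >>> "{:.3e}".format(1.0)   # 4 significant digits, explicit in scientific form
    '1.000e+00'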

-- 
Antoon. 