What is precision of a number representation?

Marko Rauhamaa marko at pacujo.net
Tue Jul 12 06:27:27 EDT 2016


Antoon Pardon <antoon.pardon at rece.vub.ac.be>:

> On 12-07-16 at 06:19, Steven D'Aprano wrote:
>> How do you represent 1 mm to a precision of four significant digits,
>> in such a way that it is distinguished from 1 mm to one significant
>> digit, and 1 mm to a precision of four decimal places?
>
> Your question doesn't have an answer because 1 mm doesn't have a
> precision of four significant digits.

Your statement is invalid. You presuppose some notational conventions.
1 mm is simply one millimeter; in and of itself it doesn't in any way
convey precision.

> A precision is an indication of a fault tolerance. You don't indicate
> less fault tolerance by writing it as 0001.

I doubt a "fault" is relevant here. In ordinary usage, precision refers
to a range of values, which is probably what you are getting at.

> Please explain how 0001 represents a different precision than just
> 1.

"Precision" here is not ordinary usage. Historically, it comes from the
printf(3) library function:

    Each conversion specification is introduced by the character %,
    and ends with a conversion specifier. In between there may be (in
    this order) zero or more flags, an optional minimum field width,
    an optional precision and an optional length modifier.
    [man 3 printf]

Thus, "precision" is simply the name of a formatting field, regardless
of the semantics of that field. The name was chosen because, for
floating-point numbers, it actually refers to the precision of the numeric
representation. However, the field has other uses that have nothing to
do with precision:

    Precision [...] gives the minimum number of digits to appear for d,
    i, o, u, x, and X conversions, the number of digits to appear after
    the radix character for a, A, e, E, f, and F conversions, the
    maximum number of significant digits for g and G conversions, or the
    maximum number of characters to be printed from a string for s and S
    conversions.
    [man 3 printf]
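
Python's % operator follows the same printf conventions, so the
different meanings of the same ".4" precision field are easy to see
interactively:

    >>> "%.4d" % 1              # minimum number of digits for d
    '0001'
    >>> "%.4f" % 1              # digits after the radix character for f
    '1.0000'
    >>> "%.4g" % 1.23456        # maximum significant digits for g
    '1.235'
    >>> "%.4s" % "millimeter"   # maximum characters for s
    'mill'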

> Writing 1.0000 instead of 1 can be understood as the actual number
> being between 0.99995 and 1.00005 instead of the actual number being
> between 0.95 and 1.05.

That certainly is a very common practice.
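
As an aside, if you want those trailing zeros to live in the value
itself rather than in a format string, Python's decimal module keeps
the digits exactly as written, even though the values compare equal
numerically:

    >>> from decimal import Decimal
    >>> Decimal("1.0000")                  # the written digits are preserved
    Decimal('1.0000')
    >>> Decimal("1.0000") == Decimal("1")  # but the numeric value is the same
    True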

> Now between which two numbers is 0001 supposed to be?

What is your problem? What practical trouble is Python's format method
giving you?

Do you ever "turn on" a light or "roll down" a window by pushing a
button? Do you "turn up" the volume by moving a slider? Why, do you ever
"write" to a solid state "disk?"


Marko


