What is precision of a number representation?

Steven D'Aprano steve at pearwood.info
Tue Jul 12 09:16:54 EDT 2016


On Tue, 12 Jul 2016 07:50 pm, Antoon Pardon wrote:

> Op 12-07-16 om 06:19 schreef Steven D'Aprano:
>> On Tue, 12 Jul 2016 07:51 am, Chris Angelico wrote:
>>
>>> say, 2,147
>>> millimeters, with a precision of four significant digits
>>
>> How do you represent 1 mm to a precision of four significant digits, in
>> such a way that it is distinguished from 1 mm to one significant digit,
>> and 1 mm to a precision of four decimal places?
>>
>> 0001
>> 1
>> 1.0000
> 
> Your question doesn't have an answer because 1 mm doesn't have a precision
> of four significant digits.

None of this argument has even the slightest relevance to the question of
what the so-called "precision" field in a format string means with integer
arguments. Regardless of any consensus, or lack thereof, about what a
measurement precision of "four significant digits" might mean for a
measurement of 1 unit, the meaning and usefulness of the precision field in
format strings will remain. How do you get this result with format?

py> "%8.4d" % 29
'    0029'
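
(To forestall the obvious answer: as far as I can tell, at least in the
CPython I have handy, format() rejects a precision for integer presentation
types outright, so the closest equivalent I know of is to nest two format
operations. The {:04d} and {:>8} specs below are just one way to spell it.)

py> "{:.4d}".format(29)
Traceback (most recent call last):
  ...
ValueError: Precision not allowed in integer format specifier
py> "{:>8}".format("{:04d}".format(29))
'    0029'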


If you don't like my interpretation of leading zeroes, okay, I'm not going
to defend it. It's really not that important. Perhaps after thinking about
it more I'll also decide that it doesn't really make sense.

When it comes to printf and the string % operator with integer types, the
so-called "precision" field should not be understood as having anything to
do with measurement error.
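
As far as I can tell, for the integer conversions precision simply means
the minimum number of digits, zero-filled, quite unlike its meaning for %f:

py> "%.4d" % 7   # at least four digits, zero-filled
'0007'
py> "%.4f" % 7   # four digits after the decimal point
'7.0000'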


> A precision is an indication of a fault tolerance. You don't indicate less
> fault tolerance by writing it as 0001.

Measurement precision, as indicated by significant digits, doesn't have
anything to do with fault tolerance. It tells you the estimated error in
the measurement, not how much fault your application can withstand. "Fault
tolerance" is a technical term that refers to the ability of a system,
machine or device to continue working correctly even when parts of it have
broken down.


> Please explain how 0001 represents a difference in precision from just 1.

Read as hexadecimal digits, the first is a 16-bit word; the second is a
4-bit nybble.

Or, if the numbers are in decimal, perhaps the first comes from an old
analogue speedometer with four digit counters. That tells you that the
largest possible number we could count is 9999. In that case the second
could come from a single analogue counter, where the largest number is 9.
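
If you want that as arithmetic rather than hardware (the digit strings here
are just the two examples above):

py> 10**len("0001") - 1   # four digit positions
9999
py> 10**len("1") - 1      # one digit position
9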



-- 
Steven
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.
