What is precision of a number representation? (was: Curious Omission In New-Style Formats)

Jan Coombs jenfhaomndgfwutc at murmic.plus.com
Mon Jul 11 19:14:52 EDT 2016


On Tue, 12 Jul 2016 07:51:23 +1000
Chris Angelico <rosuav at gmail.com> wrote:

[snip]
> 
> Yep. Precision is also a property of a measurement, the same
> way that a unit is. If I pace out the length of the main
> corridor in my house, I might come up with a result of thirty
> meters. The number is "30"; the unit is "meters", the
> precision is two significant digits, and the accuracy depends
> on how good I am at pacing distance.
> 
> This is why it's important to be able to record precisions of
> arbitrary numbers. If I then measure the width of this
> corridor with a laser, I could get an extremely precise answer
> - say, 2,147 millimeters, with a precision of four significant
> digits, and excellent accuracy. But if I multiply those
> numbers together to establish the floor area of the corridor,
> the result does NOT have four significant figures. It would be
> 64 square meters (not 64.41), and the accuracy would be pretty
> low (effectively, the *in*accuracies of both measurements get
> combined). But on the other hand, if you want to know whether
> your new fridge will fit, you could measure it with the same
> laser and come up with a figure of 1,973 mm (four sig fig),
> which would mean your clearance is 174mm (four sig fig). How
> do you record this? Is it 174.0? 0174? "174 with four
> significant figures"?

These all look good, but you may get into trouble if you trust a
PC with them!
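As an aside, the significant-figure bookkeeping in Chris's examples can be modelled with Python's decimal module, since a decimal context's precision counts significant digits rather than fixed decimal places. A minimal sketch using his numbers:

```python
from decimal import Decimal, getcontext

# Corridor area: 30 m (2 sig figs) x 2.147 m (4 sig figs).
# Setting the context precision to the lesser precision (2 digits)
# rounds the product to 2 significant figures.
getcontext().prec = 2
area = Decimal('30') * Decimal('2.147')
print(area)        # 64 square metres, not 64.41

# Fridge clearance: 2147 mm - 1973 mm, both 4 sig figs.
getcontext().prec = 4
clearance = Decimal('2147') - Decimal('1973')
print(clearance)   # 174
```

This tracks decimal digits of precision, though, not the accuracy rules of measurement arithmetic, so it is only a partial answer to the "how do you record 174 with four sig figs?" question.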

If the language/PC uses a floating point representation then it
will assign a fixed number of bits to the fraction (significand),
and this will be left-aligned (normalised) in all/most hardware.
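That layout is easy to inspect. A small sketch, assuming the IEEE 754 double-precision format (1 sign bit, 11 exponent bits, 52 fraction bits) that Python floats use on virtually all current hardware:

```python
import struct

# Reinterpret a Python float's bytes as a 64-bit unsigned integer,
# then split out the IEEE 754 double-precision fields.
bits = struct.unpack('>Q', struct.pack('>d', 2.147))[0]
sign     = bits >> 63               # 1 bit
exponent = (bits >> 52) & 0x7FF     # 11 bits, biased by 1023
fraction = bits & ((1 << 52) - 1)   # 52 bits, left-aligned fraction

print(sign, exponent - 1023, hex(fraction))
```

All 52 fraction bits are always present, whatever the precision of the value that was stored.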

This fraction might be 52 bits long. Your example number has
about 11 bits of precision. The floating point representation
will then have ~40 bits appended which imply a precision that
does not exist. Your program may still know that only 11 bits
are significant, but the representation implies that 52 bits
are significant, and provides no indication otherwise.
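Python makes this mismatch easy to demonstrate. A quick sketch: the significand width is fixed at 53 bits (52 stored), and converting the stored double back to an exact decimal exposes the spurious digits the representation silently appended:

```python
import sys
from decimal import Decimal

# Every Python float carries a 53-bit significand, regardless of
# how precise the original measurement actually was.
print(sys.float_info.mant_dig)   # 53

# The exact decimal value of the double nearest to 2.147 -- dozens
# of digits of "precision" that the measurement never had:
print(Decimal(2.147))
```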

Good news!: Unum is an alternative numeric representation that
does indicate the precision of a number [1]. It also resolves
other problems with current float representations.

Bad news?: In doing so, unums become incompatible with current
hardware floating point engines.

Jan Coombs
-- 
[1] slides:
http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPrecision1.pdf
RichReport 54 minute interview:
https://youtu.be/jN9L7TpMxeA



