Python -- floating point arithmetic

Adam Skutt askutt at gmail.com
Thu Jul 8 13:22:29 EDT 2010


On Jul 8, 12:38 pm, "Zooko O'Whielacronx" <zo... at zooko.com> wrote:
> On Thu, Jul 8, 2010 at 4:58 AM, Adam Skutt <ask... at gmail.com> wrote:
>
> > I can't think of any program I've ever written where the inputs are
> > actually intended to be decimal.  Consider a simple video editing
> > program, and the user specifies a frame rate 23.976 fps.  Is that what
> > they really wanted?  No, they wanted 24000/1001 but didn't feel like
> > typing that.
>
> Okay, so there was a lossy conversion from the user's intention
> (24000/1001) to what they typed in (23.976).
>
> >>> instr = '23.976'
>
> Now as a programmer you have two choices:
>
> 1. accept what they typed in and losslessly store it in a decimal:
>
> >>> from decimal import Decimal as D
> >>> x = D(instr)
> >>> print x
>
> 23.976
>
> 2. accept what they typed in and lossily convert it to a float:
>
> >>> x = float(instr)
> >>> print "%.60f" % (x,)
>
> 23.975999999999999090505298227071762084960937500000000000000000
>
> option 2 introduces further "error" between what you have stored in
> your program and what the user originally wanted and offers no
> advantages except for speed, right?

No, you have a third choice, and it's the only right choice:
3. Convert the input to the value the user actually intended and behave
accordingly.  Anything else, here, will result in A/V sync issues.
Which is really my point: just because we write '23.976' on the
command line doesn't necessarily mean that's what we meant.  Humans
are pretty lazy, and we write rational numbers as incomplete decimals
all of the time.
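
To make that concrete, here is a minimal sketch (mine, not anything
from the original thread) of what "choice 3" might look like in
Python: snap the typed decimal onto the exact rational the user almost
certainly meant.  The table of known rates and the tolerance are
illustrative assumptions, not part of any real tool.

from fractions import Fraction

KNOWN_RATES = [              # exact NTSC/film/PAL frame rates
    Fraction(24000, 1001),   # typed as "23.976"
    Fraction(24, 1),
    Fraction(25, 1),
    Fraction(30000, 1001),   # typed as "29.97"
    Fraction(30, 1),
    Fraction(60000, 1001),   # typed as "59.94"
    Fraction(60, 1),
]

def intended_rate(user_input, tolerance=Fraction(1, 1000)):
    """Snap a typed rate like '23.976' to the nearest known exact rate."""
    typed = Fraction(user_input)     # exact value of the decimal string
    for rate in KNOWN_RATES:
        if abs(rate - typed) <= tolerance:
            return rate              # the rational the user meant
    return typed                     # unknown rate: keep what was typed

print(intended_rate('23.976'))       # -> 24000/1001

Store 24000/1001 and the A/V sync math stays exact; store 23.976 (in
either decimal or binary form) and the error is baked in before any
arithmetic happens.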

> But this is not a disadvantage of decimal compared to float is it?
> These problems affect both representations. Although perhaps they
> affect them differently, I'm not sure.
>
> I think sometimes people conflate the fact that decimals can easily
> have higher and more variable precision than floats with the fact that
> decimals are capable of losslessly storing decimal values but floats
> aren't.
>
No, it's not a specific disadvantage of decimal compared to float.
I'm not sure why David C. chose to phrase it in those specific terms,
though I'm not sure it matters all that much.  What I believe is that
one must understand that the underlying issues are fundamental, and
the only way to solve them is to educate programmers so they can
write code that behaves correctly in the face of rounding.  And I do
believe you're correct that programmers frequently see one desirable
behavior of decimal FP over binary FP and therefore assume all the
badness must have gone away.
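
For instance (my example, not anything from this thread), decimal
floating point rounds 1/3 exactly the way binary floating point rounds
1/10 -- the "error" just moves to different values:

from decimal import Decimal, getcontext

getcontext().prec = 28           # the default precision, shown explicitly
third = Decimal(1) / Decimal(3)  # 1/3 has no finite decimal expansion
print(third)                     # 0.3333333333333333333333333333
print(third * 3 == Decimal(1))   # False -- the rounding never went away

print(0.1 + 0.2 == 0.3)          # False -- the familiar binary-float case

Until a programmer understands why both of those comparisons fail,
switching representations only changes which inputs bite them.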

Adam



