Python -- floating point arithmetic

Adam Skutt askutt at gmail.com
Thu Jul 8 10:29:41 EDT 2010


On Jul 8, 9:22 am, Mark Dickinson <dicki... at gmail.com> wrote:
> On Jul 8, 2:00 pm, Adam Skutt <ask... at gmail.com> wrote:
>
> > > On Jul 8, 7:23 am, Mark Dickinson <dicki... at gmail.com> wrote:
> > > > On Jul 8, 11:58 am, Adam Skutt <ask... at gmail.com> wrote:
>
> > > > accurately.  Moreover, in general, it's impossible to even round
> > > > operations involving transcendental functions to an arbitrary fixed
> > > > precision; you may need effectively infinite precision in order to do
> > > > the computation.
>
> > > Impossible?  Can you explain what you mean by this?  Doesn't the
> > > decimal module do exactly that, giving correctly-rounded exp() and
> > > log() results to arbitrary precision?
>
> > You run into the table-maker's dilemma: there's no way to know in
> > advance how many digits you need in order to have n bits of precision
> > in the result.
>
> Sure.  But it's a bit of a stretch to go from not knowing what
> resources you'll need in advance to calling something 'impossible'. :)
>
> > For some computations, the number of bits required to
> > get the desired precision can quickly overwhelm the finite limitations
> > of your machine (e.g., you run out of RAM first or the time to compute
> > the answer is simply unacceptable).
>
> Perhaps in theory.  In practice, though, it's very rare to need to
> increase precision more than once or twice beyond an initial first
> guesstimate, and the amount of extra precision needed is small.  That
> increase is unlikely to cause problems unless you were operating right
> up against your machine's limits in the first place.
I suspect your platitude isn't especially comforting for those who
need more computing capability than we can currently construct.
However, I wouldn't call the amount of extra precision needed "small"
for most transcendental functions: in the worst-case situations it's
frequently more than double the target precision, and it grows
non-linearly as the number of desired digits increases.
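To make that concrete, here's a rough sketch of the usual retry
strategy (my illustration only, not how the decimal module actually
implements it): recompute with ever more guard digits until the result
rounded to the target precision stops changing.  The catch, per the
table-maker's dilemma, is that there's no a priori bound on how many
guard digits the loop might need.

    from decimal import Decimal, localcontext

    def exp_rounded(x, digits):
        # Compute exp(x) to `digits` significant digits by repeatedly
        # adding guard digits until the rounded result stabilizes.
        # (Heuristic sketch: agreement between two roundings is not a
        # proof of correct rounding in the hardest cases.)
        guard = 10
        last = None
        while True:
            with localcontext() as ctx:
                ctx.prec = digits + guard
                value = Decimal(x).exp()
            with localcontext() as ctx:
                ctx.prec = digits
                rounded = +value      # unary plus rounds to ctx.prec
            if rounded == last:
                return rounded
            last, guard = rounded, guard * 2   # no a priori bound here

    print(exp_rounded(2, 50))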

This almost brings us full circle to where I was originally pointing:
the "rounding" problem is inherent in the finite nature of a physical
computer, so you cannot make the rounding problem go away.  As such,
talking about differences in rounding between decimal and binary
representations is something of a corner case.  Replacing "float" with
"decimal" won't get rid of the problems that floating-point brings to
the table in the first place.  The issues that come up all stem from a
lack of human understanding of what the computer is doing.  Take even
something as innocent as equality between two floating-point numbers:
even exact rounding of every operation doesn't solve this perfectly
common problem.  Accordingly, once we explain why this doesn't work,
we frequently find we don't need the enhanced functionality decimal
provides, and can hopefully make that determination on our own.
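For example (standard CPython, default decimal context; both
comparisons print False even though each individual operation is
correctly rounded):

    from decimal import Decimal

    print(0.1 + 0.1 + 0.1 == 0.3)             # False: binary floats
    print(Decimal(1) / Decimal(3) * 3 == 1)   # False: decimal floats too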

If you want to make elementary arithmetic (add, subtract, multiply,
divide) behave intuitively, then you (arguably) want an arbitrary-
precision fractional/rational number class.  After that, the right
solution is education.
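For instance, the standard library's fractions.Fraction already
behaves that way for the elementary operations, at the cost of
ever-growing numerators and denominators:

    from fractions import Fraction

    # Every add/subtract/multiply/divide on Fractions is exact, so the
    # "intuitive" identities hold.
    print(Fraction(1, 10) * 3 == Fraction(3, 10))   # True
    print(Fraction(1, 3) * 3 == 1)                  # True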

Adam
