[Python-ideas] Python Float Update

Adam Bartoš drekin at gmail.com
Thu Jun 4 14:52:10 CEST 2015


Thank you very much for the detailed explanation.

Regards, Drekin

On Wed, Jun 3, 2015 at 10:17 PM, Andrew Barnert <abarnert at yahoo.com> wrote:

> On Jun 3, 2015, at 07:29, drekin at gmail.com wrote:
> >
> > Stephen J. Turnbull writes:
> >
> >> Nick Coghlan writes:
> >>
> >>> the main concern I have with [a FloatLiteral that carries the
> >>> original repr around] is that we'd be trading the status quo for a
> >>> situation where "Decimal(1.3)" and "Decimal(13/10)" gave different
> >>> answers.
> >>
> >> Yeah, and that kills the deal for me.  Either Decimal is the default
> >> representation for non-integers, or this is a no-go.  And that isn't
> >> going to happen.
> >
> > What if 13/10 also yielded a fraction?
>
> That was raised near the start of the thread. In fact, I think the initial
> proposal was that 13/10 evaluated to Fraction(13, 10) and 1.2 evaluated to
> something like Fraction(12, 10).
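>
> In terms of today's fractions module, that would mean roughly this (just a
> sketch of the idea, not the proposal's actual spelling):
>
>     >>> from fractions import Fraction
>     >>> Fraction(13, 10)
>     Fraction(13, 10)
>     >>> Fraction(12, 10)   # note that it normalizes
>     Fraction(6, 5)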
>
> > Anyway, what are the objections to integer division returning a
> > fraction? Fractions are coerced to floats when mixed with them.
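>
> For reference, that is how the fractions module already behaves today:
>
>     >>> from fractions import Fraction
>     >>> Fraction(13, 10) + Fraction(1, 2)
>     Fraction(9, 5)
>     >>> Fraction(13, 10) + 0.5      # one float in the mix -> float result
>     1.8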
>
> As mentioned earlier in the thread, the language that inspired Python,
> ABC, used exactly this design: computations were kept as exact rationals
> until you mixed them with floats or called irrational functions like root.
> So it's not as if Guido didn't think of this possibility; he deliberately
> chose not to do things this way. He even wrote about this a few years ago;
> search for "integer division" on his Python-history blog.
>
> So, what are the problems?
>
> When you stay with exact rationals through a long series of computations,
> the numerators and denominators can grow huge, costing both memory and
> processing time. (I'm
> ignoring the fact that CPython doesn't even have a fast fraction
> implementation, because one could be added easily. It's still going to be
> orders of magnitude slower to add two fractions with gigantic denominators
> than to add the equivalent floats or decimals.)
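>
> A toy example of the blow-up, using only the stdlib fractions module (the
> recurrence here is made up, but the growth pattern is typical):
>
>     >>> from fractions import Fraction
>     >>> x = Fraction(1, 3)
>     >>> for _ in range(10):
>     ...     x = x * x + 1        # every step stays exact
>     ...
>     >>> len(str(x.denominator))  # digits in the denominator after 10 steps
>     489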
>
> Plus, it's not always obvious when you've lost exactness. For example,
> exponentiation between rationals is exact only if the exponent simplifies to
> a whole number (and hasn't itself become a float somewhere along the way).
> Since the fractions module doesn't have IEEE-style flags for
> inexactness/rounding, it's harder to notice when this happens.
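>
> For instance, with today's fractions module (nothing flags the point where
> exactness is lost):
>
>     >>> from fractions import Fraction
>     >>> Fraction(2, 3) ** 2                     # whole-number exponent: exact
>     Fraction(4, 9)
>     >>> type(Fraction(2, 3) ** Fraction(1, 2))  # fractional exponent: quietly a float
>     <class 'float'>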
>
> Except in very trivial cases, the repr would be much less human-readable
> and -debuggable, not more. (Or do you find 1728829813 / 2317409 easier to
> understand than 746.0184253189661?)
>
> Fractions and Decimals can't be mixed in arithmetic, and converting between
> them takes an explicit constructor call (only the Decimal-to-Fraction
> direction is even supported directly).
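>
> For example:
>
>     >>> from fractions import Fraction
>     >>> from decimal import Decimal
>     >>> Fraction(Decimal('1.1'))        # this direction has worked since 3.2
>     Fraction(11, 10)
>     >>> Fraction(1, 3) + Decimal('0.5')
>     Traceback (most recent call last):
>       ...
>     TypeError: unsupported operand type(s) for +: 'Fraction' and 'decimal.Decimal'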
>
> There are definitely cases where a rational type is the right thing to use
> (it wouldn't be in the stdlib otherwise), but I think they're less common
> than the cases where a floating-point type (whether binary or decimal) is
> the right thing to use. (And in many of the cases where you think you want
> rationals, what you actually want is SymPy-style symbolic
> computation--which can give you exact results for things with roots or sins
> or whatever as long as they cancel out in the end.)
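>
> With SymPy, for instance, the roots really do cancel symbolically:
>
>     >>> import sympy as sp
>     >>> sp.sqrt(8) / sp.sqrt(2)
>     2
>     >>> sp.sqrt(2) ** 2
>     2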