[Python-ideas] Python Float Update

Andrew Barnert abarnert at yahoo.com
Mon Jun 1 08:39:34 CEST 2015


On May 31, 2015, at 20:37, random832 at fastmail.us wrote:
> 
> Also, it raises a question of string representation. Granted, "1/3"
> becomes much more defensible as the repr of Fraction(1, 3) if it in fact
> evaluates to that value, but how much do you like "6/5" as the repr of
> 1.2? Or are we going to use Fractions for integer division and Decimals
> for literals?

That's the big problem. There's no one always-right answer.

If you interpret the literal 1.20 as a Fraction, it's going to be more confusing, not less, to people who are just trying to add up dollars and cents. Do a long financial computation and, instead of the $691.05 you expected or the $691.0500000237 you get today, you've got 10215488088 / 14782560. Not to mention that financial calculations often involve things like e or exponentiation to non-integral powers, and what happens then? And then of course there's the unbounded-size issue. If you do a long chain of operations that can theoretically be represented exactly, followed by one that can't, you're wasting a ton of time and space on those intermediate values (and, unlike Haskell, Python can't look at the whole expression in advance and determine what the final type will be).
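
To make that concrete, here's roughly how it plays out with today's fractions module if you spell the conversions out by hand (the literal-to-Fraction interpretation is simulated with explicit constructors, since no such mode exists):

    >>> from fractions import Fraction
    >>> subtotal = Fraction(1995, 100) * 3              # three items at "19.95"
    >>> total = subtotal * (1 + Fraction(825, 10000))   # add 8.25% tax
    >>> total                                           # exact, but not what a bookkeeper wants to see
    Fraction(518301, 8000)
    >>> balance = Fraction(100, 1)
    >>> for _ in range(12):
    ...     balance *= 1 + Fraction(3, 1000)            # compound 0.3% monthly interest
    ...
    >>> balance.denominator == 10**34                   # the exact intermediate values keep growing
    True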

On the other hand, if you interpret 1.20 as a Decimal, now you can't sensibly write 1.20 * 3/4 without coming up with a rule for how decimal and fraction types should interact. (OK, there's an obvious right answer for multiplication, but what about for addition?)
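
Today the two types simply refuse to mix, which is the status quo any such rule would have to replace. For example (current CPython; the exact error wording may differ):

    >>> from decimal import Decimal
    >>> from fractions import Fraction
    >>> Decimal('1.20') * Fraction(3, 4)
    Traceback (most recent call last):
      ...
    TypeError: unsupported operand type(s) for *: 'decimal.Decimal' and 'Fraction'
    >>> Fraction(Decimal('1.20')) * Fraction(3, 4)      # explicit conversion is the only option
    Fraction(9, 10)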

And either one leads to people asking why the code they ported from Java or Ruby is broken on Python.

You could make it configurable, so integer division is your choice of float, fraction, or decimal and decimal literals are your separate choice of the same three (and maybe also let fraction exponentiation be your choice of decimal or float), but then which setting is the default? And where do you set it? It has to be available at compile time, unless you want to add new constant types like "decimal literal" that the compiler emits and that are interpreted appropriately at runtime (which some languages do, and it works, but it definitely adds complexity).

Maybe the answer is just to make it easier to be explicit, using something like C++ literal suffixes, so you can write, e.g., 1.20d or 1/3f (and I guess 1.2f) instead of Decimal('1.20') or Fraction(1, 3) (and Fraction(12, 10)).
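
The suffixes are pure hypotheticals, of course; today the same thing takes the explicit constructor calls they would abbreviate:

    >>> from decimal import Decimal
    >>> from fractions import Fraction
    >>> Decimal('1.20')     # what a hypothetical 1.20d would spell
    Decimal('1.20')
    >>> Fraction(1, 3)      # what a hypothetical 1/3f would spell
    Fraction(1, 3)
    >>> Fraction(12, 10)    # what a hypothetical 1.2f would spell
    Fraction(6, 5)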

> And, what of decimal division? Right now you can't even
> mix Fraction and Decimal in arithmetic operations.
> 
> And are we going to add %e %f and %g support for both types? Directly
> so, without any detour to float and its limitations (i.e. %.100f gets
> you 100 true decimal digits of precision)?

At least here I think the answer is clear. %-substitution is printf-like, and shouldn't change. If you want formatting that can be overloaded by the type, you use {}, which already works.
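
For example, Decimal already implements __format__, so {}-style formatting gets the type's own precision without a detour through float, while %f converts to float first:

    >>> from decimal import Decimal
    >>> '%.20f' % Decimal('0.1')            # %-formatting goes through float
    '0.10000000000000000555'
    >>> '{:.20f}'.format(Decimal('0.1'))    # {} dispatches to Decimal.__format__
    '0.10000000000000000000'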


