[Python-Dev] (no subject)

Michael Chermside mcherm at mcherm.com
Thu Nov 10 19:12:03 CET 2005


Sokolov Yura writes:
> Excuse my English

No problem. Your command of English probably exceeds my command of any
other language.

> I think we could just segregate tokens for decimal and real float and
> make them interoperable.
>    Most of us work with business databases - all "floats" are really
> decimals; algebraic operations should work without float inconsistency,
> and those operations are rare, so speed is not important.
> But some of us use floats for speed in scientific and multimedia programs.

I'm not sure why you say "most" (is there some survey of Python
programmers that I haven't seen?), but I think we all agree that there
are Python users who rarely need machine floats, and others who badly
need them.

I'll take your specific suggestions out of order:
> with "from __future__ import Decimal" we could:
> c) result of operation with decimal operands should be decimal
>  >>> 1.0/3.0
> 0.33333333333333333

This already works.
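
That is, with today's spelling (a quick interpreter sketch; the exact
digits and repr formatting depend on the Python version and the Decimal
context, which defaults to 28 significant digits):

>>> from decimal import Decimal
>>> Decimal('1.0') / Decimal('3.0')
Decimal("0.3333333333333333333333333333")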

> d) result of operation with float operands should be float
>  >>> 1.0f/3.0f
> 0.33333333333333331f

This already works.
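
And likewise for binary floats (a sketch; the trailing digits in the
repr are just the usual binary rounding artifact):

>>> 1.0 / 3.0
0.33333333333333331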

> e) result of operation with decimal and float should be float (the decimal
> converts into a float and the operation is performed)
>  >>> 1.0f/3.0
> 0.33333333333333331f
>  >>> 1.0/3.0f
> 0.33333333333333331f

Mixing Decimal and float is nearly ALWAYS a user error. Doing it correctly
requires significant expertise in the peculiarities of floating point
representations. So Python protects the user by throwing exceptions when
attempts are made to mix Decimal and floats. This is the desired
behavior (and the experts already know how to work around it on the RARE
occasions when they need to).
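
For illustration, here is roughly what the user sees today (a sketch;
the exact wording of the error message varies between versions):

>>> from decimal import Decimal
>>> Decimal('1.0') / 3.0
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for /: 'Decimal' and 'float'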

> a) interpret regular float constants as decimal
> b) interpret float constants with suffix 'f' as float (like    1.5f
> 345.2e-5f  etc)

There are two different ideas here, which I will separate. The first
is a proposal that there be a way to write Decimal literals. The second
is that ###.### be the literal for Decimals and that ###.###f be the
literal for floats.

I'm in favor of the first idea. Decimals are useful enough that it would
be a good idea to provide some sort of literal for their use. This is
well worth a PEP. But if we DO agree that we ought to have literals for
both floats and Decimals, then we also need to decide which gets the
coveted "unadorned decimal literal" (ie, ###.###). Performance argues
in favor of floats (they run *MUCH* faster). Usability (particularly
for beginners) argues in favor of Decimals (they sometimes still have
surprising behavior, but less often than with binary floats). And
backward compatibility argues in favor of floats. Myself, I'm an
"expert" user (at least to this extent) and I could easily handle
either choice. If others felt like me, then it's likely that the
backward compatibility argument and the need to fight the pervasive
meme that "Python is slow" will win the day.
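
For comparison, the only spelling we have today is the string-constructor
form, Decimal('0.1'). A quick sketch of the binary-float surprise that
Decimal avoids:

>>> from decimal import Decimal
>>> 0.1 + 0.2 == 0.3
False
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True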

-- Michael Chermside


