[Python-ideas] Python Numbers as Human Concept Decimal System

Oscar Benjamin oscar.j.benjamin at gmail.com
Tue Mar 11 11:51:28 CET 2014


On 11 March 2014 07:24, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Tue, 11 Mar 2014 09:02:31 +1000
> Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> My take is that we're down to two main options:
>>
>> Stefan: use the existing decimal type, change default context to Decimal64,
>> round all decimal literals to current context
>>
>> Oscar: new fixed width decimal64 builtin type, always uses Decimal64
>> context (allowing constant folding), interoperates with variable context
>> decimal.Decimal values (producing a decimal.Decimal result)
>>
>> I lean towards Oscar's proposal, as it removes the hidden context dependent
>> behaviour and makes the builtin decimals true constant values.
>
> Yuck. Is this some kind of joke? We've gone through the trouble of
> unifying long and int in Python 3 and now people are proposing two
> different Decimal types, one with fixed precision and one with
> arbitrary precision?
>
> I'm completely -1 on this.

I understand your objection, Antoine, but there is a big difference
between integer and non-integer types in this respect. For integers
it is not hard to define a single integer type that satisfies the
vast majority of use-cases (focusing purely on semantics rather than
detailed implementation aspects and performance). The only
contentious issue is how to handle inexact division. int and long
were not really "unified" in Python 3: the long type *replaced* the
int type, as it has the universal integer semantics that are really
wanted.
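To put the inexact-division point in Python 3's own terms: there is one int type, and the only real design question was what division should produce. A minimal sketch using only stdlib types:

```python
from fractions import Fraction

# Python 3's answer: true division leaves the integers and
# returns a float, while floor division stays integral.
print(7 / 2)    # 3.5 (a float)
print(7 // 2)   # 3 (an int)

# For callers who need the division to stay exact, the stdlib
# also offers rationals.
print(Fraction(7, 2))   # 7/2, an exact rational
```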

When it comes to non-integer types there's just no single good way of
doing it. So just in the stdlib we already have float, Fraction and
Decimal. The additional libraries that I use have many more numeric
types than that. The idea here is that for most users decimal128 just
is *the* decimal type. The decimal.Decimal type is IMO overly complex
and should really be reserved for niche use-cases, like other
non-stdlib multi-precision numeric types.

Java's BigDecimal is more user-friendly in some important ways. It
performs all calculations exactly until you explicitly round, and
when you ask it to do something that must be inexact without
explicitly supplying a rounding context it throws an exception.
Naturally this is more verbose than calculations with decimal.Decimal,
but it is possible to always know what's going on just by looking at
the relevant code.
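For comparison, Python's decimal module can already approximate BigDecimal's raise-on-inexact behaviour by trapping the Inexact signal; a sketch (BigDecimal itself throws ArithmeticException in the analogous case):

```python
from decimal import Decimal, Inexact, localcontext

with localcontext() as ctx:
    ctx.traps[Inexact] = True

    # Exact operations proceed as usual: 0.1 + 0.2 is
    # representable exactly in decimal.
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3

    # An operation whose result cannot be represented exactly
    # raises instead of silently rounding to context precision.
    try:
        Decimal(1) / Decimal(3)
    except Inexact:
        print("inexact result: explicit rounding required")
```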

C# has a (non-standard, I think) fixed-width decimal type. Once
again, the precision and exponent range of this type are not governed
by a hidden, modifiable arithmetic context, so it is possible to know
what a snippet of code does before run-time.

C++ is introducing decimal32, decimal64, and decimal128 as
fixed-width decimal floating-point types, as specified in IEEE
754-2008. Their precision is always known, so static analysis can
explain what code does (excepting traps and flags; I'm not clear on
how those will work in C++).

The problem as I see it is that if Decimal is brought into the core
of the language in its current form then many people will be confused
by its behaviour. The behaviour of Python's Decimal type is governed
by a hidden global variable, the arithmetic context, so even
trivial-looking code cannot be fully understood without thinking
through all the possible contexts that might be in effect when it is
executed. I've used a few multi-precision libraries, and I find
Decimal hard to use because of the action-at-a-distance effect of the
contexts, and because Decimals can exist whose precision exceeds the
context, so that even the expression -x is subject to rounding.
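The -x surprise is easy to demonstrate. Construction from a string is exact and never consults the context, but the unary operators do apply it, so the same expression gives different answers depending on which context happens to be in effect:

```python
from decimal import Decimal, localcontext

x = Decimal("1.23456789")   # constructed exactly; context not consulted

with localcontext() as ctx:
    ctx.prec = 3            # some caller may have narrowed the context
    print(-x)               # -1.23: even negation rounds to 3 digits
    print(+x)               # 1.23: unary plus rounds too

print(-x)  # -1.23456789: same expression, different context, exact
```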

The idea here is to make decimal floating point accessible to
non-expert users. I'm concerned that the current Decimal type is
overly complex for this use-case.


Oscar


More information about the Python-ideas mailing list