[Python-ideas] Python Numbers as Human Concept Decimal System

Ron Adam ron3200 at gmail.com
Tue Mar 11 22:44:58 CET 2014



On 03/11/2014 07:33 AM, M.-A. Lemburg wrote:
> On 11.03.2014 12:46, Nick Coghlan wrote:
>> ...[Oscar's proposal being that picking just one decimal floating
>> point precision as *the* floating point precision]...
>>
>> That approach buys us a couple of key benefits:
>>
>> - they would be true constants, so constant folding, etc., would work
>> normally, and you could tell from reading the code exactly what it
>> will do, without needing to worry about the impact of thread-local
>> state
>> - you could generally use the builtin decimal literals without
>> learning anything about decimal contexts, because decimal literals
>> wouldn't be context dependent (although their behaviour would be
>> formally defined in terms of an IEEE decimal context, for most users
>> it would just be "this is how Python decimal literals behave")
>>
>> The guidance to new users would then be *don't* use the decimal
>> module, use decimal literals instead. If you need more configurable
>> behaviour, *then* reach for the decimal module. However, the explicit
>> methods on decimal context objects should probably still be updated to
>> accept both fixed and variable precision decimals under this model.
> I think you are leaving out one of the most important use cases
> for decimal values: data input and output.
>
> The literals would only appear in programs. However, in most use
> cases, you want to read decimal data from some source, process it
> and then write it back again. This would most likely also require
> using a few decimal literals, but the main source of decimals
> would still be constructors that you choose when writing the
> tools.
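
Right -- and the string constructor already handles that round trip
faithfully (the field values below are invented for illustration):

    from decimal import Decimal

    fields = ["19.99", "0.3500", "1200.00"]   # e.g. columns from a CSV
    prices = [Decimal(s) for s in fields]

    total = sum(prices, Decimal("0"))

    # str() reproduces exactly what was read in:
    print([str(p) for p in prices])  # ['19.99', '0.3500', '1200.00']
    print(total)                     # 1220.3400
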
>
> Coming back to floats and the decimal constructor:
>
> In investment banking, for example, people usually work with floats
> all the time. You only convert back to decimals at the very end of a
> calculation. The reason is either that the calculations involve
> complex operations which are not available for decimals, a desire to
> keep error intervals small, or a combination of both.
> In general, you try to use as much precision as you can afford
> (in terms of memory and speed), to keep those error intervals
> small.
>
> In accounting, you often use decimals for storing data with a
> (usually contractually or legally based) agreed-upon precision
> and rounding logic. However, there are situations where you have
> to go to floats as well in order to run calculations, e.g.
> for interest, taxes, etc.
>
> In both situations, you want to have the decimal constructor
> take the float values with full precision and only then apply
> the necessary rounding to turn the value into a form which
> complies with the agreed-upon rules. It's also not uncommon to
> add correctional bookings to address rounding issues explicitly
> (e.g. when calculating the VAT of a large number of individual
> items).
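
That last point is easy to demonstrate with the current decimal module
(the 19% rate and the line items here are made up for the example):

    from decimal import Decimal, ROUND_HALF_EVEN

    CENT = Decimal("0.01")
    rate = Decimal("0.19")            # hypothetical 19% VAT rate
    items = [Decimal("0.99")] * 100   # made-up line items

    # Round each line item's VAT, then sum:
    per_item = sum((p * rate).quantize(CENT, rounding=ROUND_HALF_EVEN)
                   for p in items)
    # Round once, on the total:
    on_total = (sum(items) * rate).quantize(CENT, rounding=ROUND_HALF_EVEN)

    print(per_item)             # 19.00
    print(on_total)             # 18.81
    print(per_item - on_total)  # 0.19 -- the gap a correction booking absorbs
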
>
> In short: you try to prevent rounding from happening as
> much as possible and when using it, you use it in a
> controlled way.
>
> Based on this, the choice to have the decimal constructor
> use full precision when reading floats is a good one, even
> though it may not feel right for the novice, the casual
> decimal user, or as a human concept :-)
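
That is indeed what today's constructor does: the conversion is exact
(every float is a dyadic rational, so it has a finite decimal
expansion), and rounding only happens when you explicitly ask for it:

    from decimal import Decimal, ROUND_HALF_EVEN

    x = Decimal(1.1)   # exact image of the binary float
    print(x)           # 1.100000000000000088817841970012523233890533447265625

    # Rounding is a separate, controlled step:
    print(x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 1.10
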
>
> For decimal literals, I'd argue that if you enter a
> value 1.2500d, you are expecting a decimal with 4 decimal
> places of precision, not 64 or 128 bits :-)
>
> The information about the intended precision is implicit in the
> literal. You see this done in exchange rates, stock prices,
> prices at your grocery store, etc.
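
The decimal module already captures that implicit information: a
Decimal carries its exponent, so the written precision is part of the
value:

    from decimal import Decimal

    a = Decimal("1.2500")
    print(a)                      # 1.2500 -- trailing zeros survive
    print(a.as_tuple().exponent)  # -4, i.e. four decimal places
    print(a == Decimal("1.25"))   # True: equal in value, not in form
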
>
> The decimals can then be transformed into ones with higher
> precision during calculations and then possibly back to lower
> precision, but this is an explicit decision by the system doing
> the calculation.
>
> Anyway, just throwing some additional entropy into this
> discussion. Probably not all that helpful, since you're already
> converging on two possible solutions :-)

I like Django's measurement object approach.

https://docs.djangoproject.com/en/dev/ref/contrib/gis/measure/

It just needs a way to track accuracy (significant digits), which can
be independent of the underlying numeric type. (I think Django has
that too.)

Possibly, it could promote its underlying type (int, float, decimal)
if it needs to, or raise an exception if the underlying type isn't
accurate enough to provide the requested precision.

The same concept could be used for currency calculations, or time for that 
matter.
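
A rough sketch of the idea (the class and its behaviour are invented
here for illustration; this is not anything Django ships):

    from decimal import Decimal

    class Measure:
        """Toy value-with-accuracy wrapper."""

        def __init__(self, value, sig_digits):
            # A binary float only reliably carries ~15 significant
            # decimal digits; refuse rather than silently overstate.
            if isinstance(value, float) and sig_digits > 15:
                raise ValueError("float cannot provide %d significant "
                                 "digits" % sig_digits)
            self.value = value
            self.sig_digits = sig_digits

        def __add__(self, other):
            # The result is only as accurate as the worse operand.
            return Measure(self.value + other.value,
                           min(self.sig_digits, other.sig_digits))

        def __repr__(self):
            return "Measure(%r, sig_digits=%d)" % (self.value,
                                                   self.sig_digits)

    price = Measure(Decimal("19.99"), 4)
    tax = Measure(Decimal("1.60"), 3)
    print(price + tax)   # Measure(Decimal('21.59'), sig_digits=3)
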

This would do much more than adding a new underlying type to Python.
It encapsulates expert knowledge in objects that can be used by users
whose expertise may be in other areas.

The actual underlying numeric type could depend on the amount of
accuracy needed rather than some preconceived notion of greater
accuracy that probably isn't needed in most cases, e.g. Decimal128
for calculations that only require 6 significant digits.

Cheers,
    Ron


