[Python-Dev] Re: Decimal data type issues

Kevin Jacobs jacobs at theopalgroup.com
Tue Apr 20 10:00:40 EDT 2004


Batista, Facundo wrote:

>[Kevin Jacobs]
>#- <sarcasm>Great!</sarcasm>.   One of my previous posts specifically
>#- listed that I didn't want to have to pre-parse and reformulate string
>#- literals to achieve the desired precision and scale.  The "external"
>
><lost> what? </lost> :p
>
>I still don't understand why you want that.
>
>  
>
It seems that Jim and I want to be able to easily create Decimal instances
that conform to a pre-specified (maximum) scale and (maximum) precision.
The motivation for this is clearly explained in the section of the
PostgreSQL manual that I sent the other day: numeric and decimal values in
SQL are specified in terms of scale and precision parameters.  Thus, I
would like to create Decimal instances that conform to those schemas --
i.e., they would be rounded appropriately, and overflow errors raised if
they exceeded either the maximum precision or scale.  e.g.:

  Decimal('20000.001', precision=5, scale=0) === Decimal('20000')
  Decimal('20000.001', precision=4, scale=0) raises an overflow exception
  Decimal('20000.001', precision=5, scale=3) raises an overflow exception
  Decimal('200.001',   precision=6, scale=3) === Decimal('200.001')
  Decimal('200.000',   precision=6, scale=3) === Decimal('200') or Decimal('200.000')
     (depending on whether precision and scale are interpreted as absolutes
      or as maximums)

In order to accomplish this behavior in an "external" library, either the
literals would have to be pre-parsed and manipulated, or an intermediate
Decimal value would be created from the raw literal, which would then be
used to detect overflow and to apply the necessary rounding based on the
desired (maximum) scale.
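
As a rough sketch of that second approach -- using the quantize(),
scaleb() and as_tuple() methods that the decimal module now provides, and
with a helper name and keyword arguments that are purely illustrative, not
a proposed API -- it might look something like this:

    from decimal import Decimal, ROUND_HALF_UP

    def decimal_with_schema(literal, precision, scale,
                            rounding=ROUND_HALF_UP):
        """Round `literal` to at most `scale` fractional digits and raise
        if the result needs more than `precision` total digits, mimicking
        SQL NUMERIC(precision, scale).  A sketch only, not a proposed API.
        """
        value = Decimal(literal)
        # Round to the maximum scale (number of fractional digits).
        quantum = Decimal(1).scaleb(-scale)     # e.g. scale=3 -> 0.001
        quantized = value.quantize(quantum, rounding=rounding)
        # Count total digits and compare against the maximum precision.
        if len(quantized.as_tuple().digits) > precision:
            raise OverflowError('%s does not fit in NUMERIC(%d, %d)'
                                % (literal, precision, scale))
        return quantized

    decimal_with_schema('20000.001', precision=5, scale=0)  # Decimal('20000')
    decimal_with_schema('20000.001', precision=4, scale=0)  # raises OverflowError
    decimal_with_schema('200.000',   precision=6, scale=3)  # Decimal('200.000')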

Hopefully this is somewhat clearer.

>If I don't misunderstand, you're saying that storing additional zeroes is
>important to your future operations?
>
>  
>
Not for mine.  I would be content with interpreting the scale and precision
parameters as maximums rather than absolutes.  However, it is important to
poll other users, since their applications may be less forgiving.

>>>>Decimal('2.4', scale=4)      # behaviour not intended, just an example
>>>>        
>>>>
>Decimal( (0, (2, 4, 0, 0, 0), -4) )
>
>...just to represent that you have that precision in your measurements and
>reflect that in future arithmetic operations?
>
>If yes, I think that: a) '2.4' and '2.4000' will behave identically in
>future operations;  b) why do you need to represent in the number the
>precision of your measurement?
>
>  
>
Neither.  It is well understood that operations on Decimal instances must
rely on the context.  The idea here is to overflow and round correctly upon
instance creation, without going through a great deal of additional effort.
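
For instance, with the module as it currently behaves (assuming the string
constructor stays exact, as it is today), the context precision only kicks
in on arithmetic, not on construction:

    from decimal import Decimal, getcontext

    getcontext().prec = 4             # context precision, in total digits
    Decimal('20000.001')              # stored exactly: Decimal('20000.001')
    Decimal('20000.001') + 0          # rounded by the context: Decimal('2.000E+4')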

Thanks,
-Kevin



