[Python-Dev] Re: Decimal data type issues

Batista, Facundo FBatista at uniFON.com.ar
Tue Apr 20 09:28:48 EDT 2004


[Kevin Jacobs]

#- >[Jewett, Jim J]
#- >#- Under the current implementation:
#- >#- 	(0, (2, 4, 0, 0, 0), -4) 
#- >#- is not quite the same as
#- >#- 	(0, (2, 4), -1)
#- >#- Given this, it should be possible for the user to specify
#- >#- (at creation) which is desired.
#- >
#- >It *is* possible:
#- >
#- >>>>Decimal('2.4000')
#- >Decimal( (0, (2, 4, 0, 0, 0), -4) )
#- >
#- >>>>Decimal('2.4')
#- >Decimal( (0, (2, 4), -1) )
#- >  
#- 
#- <sarcasm>Great!</sarcasm>.   One of my previous posts specifically
#- listed that I didn't want to have to pre-parse and reformulate string
#- literals to achieve the desired precision and scale.  The "external"

<lost> what? </lost> :p

I still don't understand why you want that.


#- >If you construct using precision, and the precision is smaller than
#- >the quantity of digits you provide, you'll get rounded, but if the
#- >precision is greater than the quantity of digits you provide, you
#- >don't get filled with zeros.
#- 
#- Rounding is exactly what should be done if one exceeds the desired
#- precision.  Using less than the desired precision (i.e., not filling
#- in zeros) may be okay for many applications.  This is because any
#- operations on the value will have to be performed with the precision
#- defined in the decimal context.  Thus, the results will be identical,
#- other than that the Decimal instance may not store the maximum
#- precision available by the schema.

If I'm not misunderstanding, you're saying that storing additional zeros
is important to your future operations?

Let's look at an example.

If I have '2.4000' and pass it to Decimal, I get:

>>>Decimal('2.4000')
Decimal( (0, (2, 4, 0, 0, 0), -4) )

If I have '2.4' and pass it to Decimal, I get:

>>>Decimal('2.4')
Decimal( (0, (2, 4), -1) )
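(For what it's worth, with the decimal module as it eventually shipped, which
may differ from the 2004 draft under discussion here, you can check both
points directly: the two values compare equal numerically, while as_tuple()
exposes the differing internal representations. A minimal sketch:)

```python
from decimal import Decimal

a = Decimal('2.4000')
b = Decimal('2.4')

assert a == b  # numerically equal despite the extra zeros
assert a.as_tuple() == (0, (2, 4, 0, 0, 0), -4)
assert b.as_tuple() == (0, (2, 4), -1)
```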

Are you trying to say that you want Decimal to fill up that number with
zeroes...

>>>Decimal('2.4', scale=4)      # behaviour not implemented, just an example
Decimal( (0, (2, 4, 0, 0, 0), -4) )

...just to represent that you have that precision in your measurements and
reflect that in future arithmetic operations?

If yes, I think that: a) '2.4' and '2.4000' will behave identically in
future operations; and b) why do you need to represent the precision of
your measurement in the number itself?
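(A caveat on point a), going by the decimal module as it eventually shipped:
the two values are always numerically equal, but trailing zeros can survive
arithmetic, so printed results are not always identical. A sketch:)

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 28  # default precision; wide enough that no rounding occurs
    m1 = Decimal('2.4') * 2
    m2 = Decimal('2.4000') * 2

assert m1 == m2             # numerically equal
assert str(m1) == '4.8'     # but the trailing zeros propagate...
assert str(m2) == '4.8000'  # ...through multiplication
```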

.	Facundo


