[Tutor] Unexpected result from decimal

Jacob S. keridee at jayco.net
Fri Jan 21 15:43:51 CET 2005


Okay, so how do I get decimal to set precision of *significant digits*?
Why have a decimal.getcontext().prec if it doesn't provide a useful result?
The number of digits in a number is irrelevant to that number's value.
It just doesn't make sense to me.
I tried quantize the other day and it didn't work -- it gave some sort of error,
I don't remember what, and I can't get it to happen again today.
So, I guess it works now and I can go back to wondering why
getcontext().prec = a
is not equal to
quantize(decimal.Decimal("0." + "0"*(a-1) + "1"))
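
(For reference, here is a rough interactive sketch of the difference as I
understand it: prec counts significant digits across the whole number, while
quantize rounds to a fixed exponent, i.e. a fixed number of decimal places.
The exact session below is only an illustration:)

>>> import decimal
>>> decimal.getcontext().prec = 4          # 4 *significant* digits
>>> a = decimal.Decimal(2)
>>> b = decimal.Decimal(3)
>>> 100*a/b
Decimal("66.67")
>>> decimal.getcontext().prec = 28         # back to the default
>>> (100*a/b).quantize(decimal.Decimal("0.0001"))   # 4 *decimal places*
Decimal("66.6667")

(Note that quantize will raise InvalidOperation if the result would need more
significant digits than the current precision allows, so the two settings do
interact.)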

Thanks,
Jacob
Comments on everyone's posts are below.

>> >>> import decimal
>> >>> decimal.getcontext().prec = 2
>> >>> a = decimal.Decimal(2)
>> >>> b = decimal.Decimal(3)
>> >>> 100*a/b
>> Decimal("67")
>> >>> print 100*a/b
>
> This prints "67".
>
>> try -
>>
>> a=decimal.Decimal(2.0)
>
> This will not work.  You can't convert a float directly to a decimal.Decimal
> (I believe this is so that you are forced to understand that there are
> precision issues involved).  'a = decimal.Decimal("2.0")' will do what you
> meant, though.

Decimal is a totally different type from integers and floats. It is not affected
by float division problems. It is not my type; it is in the standard distribution
of Python 2.4.
You have to represent floats as strings because, if you don't, a float that has
already lost precision is used instead of the exact value you intended.
For example --
If I could pass the float 1.1 to decimal.Decimal, it would receive something like
1.1000000000000001, because the binary float has already lost the exact value
before Decimal ever sees it -- and lost precision is exactly what decimal is
trying to resolve in the first place!
However, if I pass the string "1.1", strings aren't affected by binary floating
point problems, so the value is kept in perfect precision from the beginning.
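
Something along these lines (a sketch from a Python 2.4 prompt; the exact
digits and the TypeError wording are from memory, so treat them as illustrative):

>>> import decimal
>>> decimal.Decimal("1.1")     # built from a string: exact
Decimal("1.1")
>>> repr(1.1)                  # the float has already drifted
'1.1000000000000001'
>>> decimal.Decimal(1.1)       # so 2.4 refuses the float outright
Traceback (most recent call last):
  ...
TypeError: Cannot convert float to Decimal.  First convert the float to a string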

>> b = decimal.Decimal(3)
>> print 100*a/b
>
> However, this will print out "67", just as the above did.  The reason is the
> one that Tim outlined: precision isn't the number of digits after the decimal
> place - when I was at school the latter was called "decimal places" and
> precision was "significant digits".
>
>> Jacob- one slight flaw/quirk in Python is if you want floating point
>> computations you have to specify a floating point.
> [...]
>> Same as writing 100/3.0 as opposed to 100/3. Try it.
>
> Note that you can do 'from __future__ import division' and 100/3 will be the
> same as 100/3.0 (and 100//3 will give you 33).

See above

> =Tony.Meyer
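
(For completeness, a quick sketch of the quoted plain int/float division point --
Decimal is not involved here, and the trailing digits depend on the float repr:)

>>> 100/3                      # classic Python 2 division on two ints truncates
33
>>> 100/3.0
33.333333333333336
>>> from __future__ import division
>>> 100/3                      # now / is true division
33.333333333333336
>>> 100//3                     # and // is the explicit floor division
33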


