precision problems in base conversion of rational numbers

Brian van den Broek bvande at po-box.mcgill.ca
Mon Jul 4 07:11:47 EDT 2005


Hi all,

I guess it is more of a maths question than a programming one, but it 
involves use of the decimal module, so here goes:

As a self-directed learning exercise I've been working on a script to 
convert numbers to arbitrary bases. It aims to take whole numbers 
(Python ints, longs, or Decimals), rational numbers (n / m, with n and 
m whole), and floating point numbers (the best I can do for reals), 
and convert them to any base between 2 and 36, to a desired precision.
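
For the whole-number part there's nothing exotic going on: it's 
essentially the standard repeated-divmod loop. A simplified sketch 
(the names here are made up for this post, not my actual class code):

DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def int_to_base(n, base):
    # Peel off the lowest base-`base` digit each time round, then
    # reverse, since digits come out least significant first.
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(DIGITS[r])
    return "".join(reversed(digits))

# int_to_base(345, 17) -> '135'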

I'm pretty close, but I know I am not handling the precision quite 
right. Nothing hangs on it other than understanding, but that's 
enough :-)

To do all this I'm using the decimal module (for the first time), and 
I've been setting the precision with

decimal.getcontext().prec = max(decimal.getcontext().prec,
                                int(x * self.precision))

This is in my class's __init__ method, before I convert every number 
to the Decimal type; self.precision at this point is an int passed in 
by the caller.
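
(In case it matters for the question: as I understand it, prec is the 
number of significant decimal digits carried through every Decimal 
operation, so for example:

import decimal

decimal.getcontext().prec = 5
decimal.Decimal(1) / decimal.Decimal(7)
# -> Decimal("0.14286"), i.e. rounded to 5 significant digits

That rounding happens on every intermediate operation, not just on the 
final result.)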

The first term is there to make sure I don't reduce whatever precision 
is already in effect. For the second term I'd started off with x = 1, 
and that works well enough for "small" cases (i.e. cases where I 
demanded a relatively low degree of precision in the output).

I've no idea how to work out what x should be in general. (I realize 
the answer may be a function of my choice of algorithm. If it is 
needed, I'm happy to extract the relevant chunks of code in a 
follow-up -- this is long already.)
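
To give the flavour without pasting everything: the fractional part 
boils down to the usual repeated multiply-by-base loop. In simplified 
form (using the DIGITS string from the whole-number sketch above):

def frac_to_base(frac, base, places):
    # `frac` is a Decimal in [0, 1).  Each pass multiplies by the
    # base and peels off the integer part as the next digit.  Note
    # that every multiplication is rounded to the context's prec,
    # which I suspect is where the junk at the tail comes from.
    out = []
    for i in range(places):
        frac *= base
        digit = int(frac)
        out.append(DIGITS[digit])
        frac -= digit
    return "".join(out)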

For getcontext().prec = 80 (i.e. x = 1), when I work out 345 / 756 in 
base 17 to 80 places (i.e. self.precision = 80) I get:

 >>> print Rational_in_base_n(345, 756, 17, 80)
0.7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0D5C1G603999179EB

(Rational_in_base_n(numerator, denominator, base, precision) is the 
rational-specific subclass. When I convert the result back to decimal 
notation by hand, it agrees with the correct answer to as many places 
as I cared to check.)

I've discovered empirically that I have to set getcontext().prec = 99 
or greater (i.e. x >= 1.2375) to get
0.7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7CF0CA7C

(I do feel safe in assuming that is the right answer. :-)

How would I go about working out, in general, what precision the 
decimal module's context needs in order to get n 'digits' of precision 
in a base-m expansion? (I've no formal training in Comp Sci, nor in 
the relevant branches of mathematics.)
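
My only hunch so far: 99 / 80 = 1.2375 is suspiciously close to 
log10(17), which is about 1.2304. Since n digits in base m can 
distinguish m**n values, they carry about n * log10(m) decimal digits 
of information, so maybe the rule is something like this (a guess on 
my part, not something I've proved):

import math

def needed_prec(places, base, guard=2):
    # Guess: `places` base-`base` digits correspond to about
    # places * log10(base) decimal digits, plus a couple of guard
    # digits to absorb rounding in the intermediate operations.
    return int(math.ceil(places * math.log10(base))) + guard

# Without the guard digits this gives exactly 99 for places=80,
# base=17, matching the threshold I found empirically above.

Does that reasoning hold up, and is the number of guard digits needed 
itself a function of the algorithm?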

Thanks and best,

Brian vdB



