Setting longer default decimal precision

Kay Y. Jheallee nospam at use.net
Mon Nov 18 09:14:33 EST 2013


Using 1/3 as an example,

>>> 1./3
0.3333333333333333
>>> print "%.50f" % (1./3)
0.33333333333333331482961625624739099293947219848633
>>> print "%.50f" % (10./3)
3.33333333333333348136306995002087205648422241210938
>>> print "%.50f" % (100./3)
33.33333333333333570180911920033395290374755859375000

which seems to mean that real precision (at least by default)
is limited to IEEE 754 "double": about 16 significant digits,
with rounding error beyond that.
Is there a way to increase the real precision, preferably as
the default?
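
A rough sketch of the kind of thing I mean, using the
standard-library decimal module (the 50 here is arbitrary, and
this widens Decimal arithmetic rather than the float default):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50  # significant digits for Decimal results
>>> Decimal(1) / Decimal(3)
Decimal('0.33333333333333333333333333333333333333333333333333')
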
For comparison, UBasic uses a "words for fractionals" parameter,
f, and a "Point(f)" system, where Point(f) sets the decimal
display precision to .1^int(ln(65536^f)/ln(10)), with the last
few digits usually garbage.
Using "90*(pi/180)*180/pi" as an example to highlight the
rounding error (f = 4 is UBasic's default):

  Point(2)=.1^09: 89.999999306
  Point(3)=.1^14: 89.9999999999944
  Point(4)=.1^19: 89.9999999999999998772
  Point(5)=.1^24: 89.999999999999999999999217
  Point(7)=.1^33: 89.999999999999999999999999999999823
 Point(10)=.1^48: 89.999999999999999999999999999999999999999999997686
 Point(11)=.1^52: 89.9999999999999999999999999999999999999999999999999632
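
For what it's worth, the exponents in the table (9, 14, 19, 24,
33, 48, 52) match that formula, since ln(65536^f)/ln(10) is just
f*log10(65536); a quick check:

>>> import math
>>> [int(f * math.log10(65536)) for f in (2, 3, 4, 5, 7, 10, 11)]
[9, 14, 19, 24, 33, 48, 52]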

If not in the core language, is there a higher-precision module
that can be added?
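
One candidate might be the third-party mpmath library; a minimal
sketch, assuming its mp.dps setting controls the working decimal
precision as its documentation describes:

>>> from mpmath import mp, pi
>>> mp.dps = 50  # working precision, in decimal digits
>>> print 90 * (pi / 180) * 180 / pi

The last digit or two should still show rounding error, much like
the UBasic output above.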

-- 
Kill Hector dead, because Desi sent Milli.


