[Tutor] built in functions int(),long()+convert.base(r1,r2,num)

cino hilliard hillcino368@hotmail.com
Wed Jun 25 18:51:49 2003


Hi Jeff,
Thanks for the explanations. I realize where you are coming from in terms
of explaining how Python works. It is interesting and I learned from it.
My point, though, is more about why certain results happen in relation to
what the book says.

Are these statements true or false?

1. May be any integer in the range [2, 36], or zero.

2. If radix is zero, the proper radix is guessed based on the contents of
string;

The first is true. What about the second?
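
For what it's worth, here is what that radix-zero guess looks like at the
interpreter; it only looks at the literal prefix ('0x' for hex, a leading
'0' for octal, otherwise decimal), so it is the documented guess rather
than a general base detector:

>>> int('0xff', 0)      # '0x' prefix, guessed as hexadecimal
255
>>> int('0777', 0)      # leading '0', guessed as octal
511
>>> int('11111111', 0)  # no prefix, guessed as decimal
11111111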

>
>>>and there *is* a conversion step in between.  The conversion is 
>>>especially apparent when dealing with floats, because the same float will 
>>>display differently depending on whether you use str() or repr() (the 
>>>interpreter uses repr() by default) --
>>>
>>> >>> repr(0.1)
>>>'0.10000000000000001'
>>
>>Can't this be fixed?
>
>
>Which, that 0.1 can't be represented in binary?  No, that can't be
Try this.
>>>1./3
0.33333333333333331
Why this bogus display?
Here is how Pari solves the problem.
? \p 17
   realprecision = 19 significant digits (17 digits displayed)
? 1./3
0.33333333333333333
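
For comparison, Python will also show a rounded form if you ask for one;
this is purely a display choice, the stored double is the same either way:

>>> print 1./3              # print uses str(), which rounds to 12 digits
0.333333333333
>>> repr(1./3)              # repr() shows enough digits to round-trip
'0.33333333333333331'
>>> print "%.15g" % (1./3)  # or pick your own display precision
0.333333333333333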

>>>Now, is this intended to convert 15 to binary (11111111), or FFFF to 
>>>binary (11111111 11111111 11111111 11111111) ??  There's no way to tell, 
>>>and Python certainly shouldn't be trying to guess.
>>
>>Oh no? The Book says
>>
>>>Maybe any integer in the range [2, 36], or zero. If radix is zero, the 
>>>proper radix is guessed based on the contents of string;

>Note that, by your logic of how these guesses should be done,
My logic? Is this a true statement, yes or no?
May be any integer in the range [2, 36], or zero. If radix is zero,
the proper radix is guessed based on the contents of string;

>that would be the intent, since there is virtually *never* any use for 
>numbers in a nonstandard base.
Maybe not for you.

>
>When 64 bit 128 bit and higher processor chips hit the mainstream you may 
>change your opinion
This was just a hunch about how instructions are encoded for the processor.
With a higher radix, the instructions could be crunched into a smaller space.
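
Just to put a number on the "smaller space" idea, here is how many digits a
64-bit value takes in a few bases (using the base() function from the script
below):

>>> for b in (2, 8, 10, 16, 128, 207):
...     print b, len(base(10, b, 2L**64 - 1))
...
2 64
8 22
10 20
16 16
128 10
207 9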
>
>The number of bits that processors can use has almost zero correlation with 
>the usefulness of number represented in different bases.  We currently have 
>32-bit processors (in most cases), but that doesn't mean we're using base 
>32 numbers for anything.  We've used base 16 numbers since long before 
>16-bit processors were standard.  When 64-bit processors become standard, 
>humans will *not* learn to read base-64 numbers; we'll simply represent 
>processor words with a longer string of hexadecimal digits.
Isn't this backward evolution? Why didn't we just use longer strings of
octal when we went to 16-bit processors? Anyway, here is a practical example
that uses up to base 207.


#                     A practical application of base conversion.
#                                 By Cino Hilliard
#                                    6/24/2003
# This little program demonstrates a practical use of base conversion to
# compress base 10 numbers using the ASCII characters 48 - 254 (207 of
# them), allowing bases 2 - 207. With a little work, it can be changed to
# compress text also. Using the testpi function for 1000 digits, we can
# determine the compression ratio for various bases. E.g., base 2 = 332%,
# base 8 = 111%, base 10 = 100%, base 100 = 50%, base 207 = 43.2%.
# Perhaps others on the list can tweak it to get better compression. It may
# be possible to use another character set, say a 511-character super ASCII
# set. Processing gets slow as we increase the number of digits to, say,
# 10000. This may be improved by doing 1000 characters at a time, getting
# 10 packets of base 207 to be converted back 1000 at a time. Also, this
# could be used as an encryption scheme for sensitive data. If you are a
# mystic, you can look for words or messages in the characters of Pi; go
# out far enough and you will read the Bible word for word with this,
# though you will have to put the spaces and punctuation in yourself.
# Prime number enthusiasts can use the base converter to find prime words
# or phrases.

def testpi(r1, r2, n):
    # Compute n digits of Pi, pack them from base r1 (normally 10) into
    # base r2, show the packed string and its length, then unpack it again.
    pi = piasn(n)
    print pi
    print ""
    x = base(r1, r2, pi)
    print x
    print len(x)
    y = base(r2, r1, x)
    print y

def base(r1, r2, num):
    # Convert num (a number or a digit string in base r1) into a digit
    # string in base r2, using chr(48)..chr(254) as the 207 digit symbols.
    digits = ""
    for j in range(48, 255):
        digits = digits + chr(j)
    num = str(num)
    ln = len(num)
    dec = 0
    for j in range(ln):           # first accumulate the plain integer value
        asci = ord(num[j])
        temp = r1**(ln-j-1)
        ascii2 = asci - 48
        dec += ascii2*temp
    if dec == 0:                  # zero would otherwise give an empty string
        return digits[0]
    RDX = ""
    j = 0                         # largest power of r2 not exceeding dec,
    while r2**(j+1) <= dec:       # found with integers (no float round-off)
        j += 1
    while j >= 0:                 # peel off one base-r2 digit at a time
        Q = dec/(r2**j)
        dec = dec%(r2**j)
        RDX = RDX + digits[Q]
        j -= 1
    return RDX

def pix(n):  # Compute the digits of Pi
    # Uses the series pi/2 = sum_{k>=0} k!/(1*3*5*...*(2k+1)), with all
    # arithmetic done on scaled longs (m = n+5 guard digits).
    n1 = n*34/10              # about 3.4*n terms; each adds ~0.3 digits
    m = n+5
    p = d = 10**m
    k = 1
    while k < n1:
        d = d*k/(k+k+1)       # next term of the series, still scaled by 10**m
        p = p+d
        k += 1
    p *= 2
    p = str(p)
    return p[:n]

def piasn(n):  # My faster version to compute Pi.
    # Uses the series for 6*asin(1/2):
    #   pi = 3 * sum_{k>=0} C(2k,k) / (16**k * (2k+1))
    # on scaled longs; successive terms shrink by about 1/4.
    n1 = n*7/2 + 5
    m = n+5
    p = x = 10**m
    d = 1
    while d <= n1:
        x = x*d/(d+1)/4       # advance to the next C(2k,k)/16**k factor
        p = p+x/(d+2)
        d += 2
    p *= 3
    p = str(p)
    return p[:n]
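
For reference, a quick round trip at the interpreter with the functions
above (the 432 matches the 43.2% ratio quoted in the header comment):

>>> base(10, 2, 1024)
'10000000000'
>>> base(2, 10, '10000000000')
'1024'
>>> p = piasn(1000)
>>> len(base(10, 207, p))
432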

>
>I say *almost* zero correlation, because the reason that hexadecimal is so 
>popular is that a standard byte (8 bits) can be exactly represented using 
>two hex digits.  Every possible 8-bit value can be shown in two hex digits, 
>and every 2-hex-digit value can be shown in 8 binary digits (bits).  Humans 
>typically find '0xE4' easier to read

What is so unappealing about reading 1024 decimal as 100 in base 32, or 80
in base 128? Isn't there economy here, say, from an encoding standpoint?
Sure, type it in decimal, but let the converter encode it in base 128. How
about 18446744073709551617 = 2**64+1 = 2000000001 in base 128? It just
seems natural that this trend will continue.

>>>practical.base(10,32,1024)
'100'
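
The base() function above reproduces the 2**64+1 example as well; the digit
values 0, 1 and 2 happen to map to the familiar characters '0', '1', '2':

>>> base(10, 128, 18446744073709551617L)
'2000000001'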

>than '11100100', so hex makes a convenient shorthand for looking at bit 
>patterns.  Note that this means that 32 bits are equivalent to 8 hex 
>digits, and 64 bits to 16 hex digits.  Once upon a time, many 
>mainframes/minicomputers used 9-bit, or 18-bit, or 27-bit words.  9 bits 
>have that same mapping to three octal digits, so for these machines, octal 
>is the convenient shorthand.  As that type of machine passes out of favor, 
>octal is passing out of favor now, too.
>
>The point of this diversion is simply to show that unusual bases are 
>extremely rare, and serve very little practical purpose, which is
Well, that means they are extremely valuable. We will see.

>*why* the Python interpreter is biased towards a few specific bases.
Not really. It allows conversion in any base from 2 up to 36.


Will Python ever become a compiler capable of compiling itself?

Will Python ever have arbitrary precision floating point built in, like the
Pari, Mathematica, and Maple interpreters?

>
>Jeff Shannon
>Technician/Programmer
>Credit International
>
>
>
>_______________________________________________
>Tutor maillist  -  Tutor@python.org
>http://mail.python.org/mailman/listinfo/tutor
