Floating point calculation problem

Steven D'Aprano steve+comp.lang.python at pearwood.info
Sat Feb 2 07:05:31 EST 2013


Schizoid Man wrote:

> I have a program that performs some calculations that runs perfectly on
> Python 2.7.3. However, when I try to execute it on Python 3.3.0 I get the
> following error:
>     numer = math.log(s)
> TypeError: a float is required
> 
> The quantity s is input with the following line: s = input("Enter s:   ")
> 
> To get rid of the compile error, I can cast this as a float: 
> s = float(input("Enter s:   "))
> 
> However, then the result returned by the method is wrong. Why does this
> error occur in version 3.3.0 but not in 2.7.3? Why is the result incorrect
> when s is cast as a float (the casting is not required in 2.7.3)?


Others have already discussed the differences between input() in Python 2
and 3, but there's another difference that could be causing you to get the
wrong results: in Python 2, dividing one int by another defaults to
*integer* division.
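
(For the record, in Python 3 input() always returns a string, so the
float() call you added is the right fix for the TypeError:

>>> s = input("Enter s:   ")
Enter s:   3
>>> s
'3'
>>> float(s)
3.0

Python 2's input(), by contrast, evaluates whatever you type, which is how
you were getting ints and floats out of it.)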

If you type a value like "3" (with no decimal point) at Python 2's input()
prompt, you will get the int 3, not the float 3.0. If you then divide it by
another integer, by default you will get floor (integer) division instead of
the result you probably expect:


>>> num = input('Enter a value: ')
Enter a value: 3
>>> print num/2
1


Whereas if you type it with a decimal point, input() will turn it into a
float, and you will get float division:

>>> num = input('Enter a value: ')
Enter a value: 3.0
>>> print num/2
1.5

This does not happen in Python 3.x -- you always get floating point
division, even if both the numerator and denominator are ints.
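
For example, in a Python 3 interpreter you should see something like this:

>>> 3/2
1.5
>>> 3//2     # use // if you really do want floor division
1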


You can fix this, and get the proper calculator-style floating point
division, in Python 2 by putting this line at the very top of your script:

from __future__ import division


This must appear before any other code: it can follow comments, blank lines
and the module docstring, but no other statements.
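
Putting it together, a minimal sketch of a script that behaves the same
under 2.7 and 3.3 might look like this (I'm guessing at everything except
the math.log line, so adjust to suit):

from __future__ import division   # harmless in 3.x, needed in 2.x
import math

s = float(input("Enter s:   "))   # explicit conversion works in both versions
numer = math.log(s)
denom = 2                         # made-up denominator, just for illustration
print(numer / denom)              # true division in both 2.7 and 3.3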



-- 
Steven



