[Numpy-discussion] Re: numpy, overflow, inf, ieee, and rich comparison

Paul Prescod paulp at ActiveState.com
Wed Oct 25 01:01:03 EDT 2000


Donald O'Donnell wrote:
> 
> ....
> 
> .... Every
> main-stream language I've ever used (Fortran, COBOL, Basic,
> C, C++, Java,...) have all truncated the result of integer
> division to an integer, and that can be very useful.  ...

> If you really want a floating point
> result in the above example, all you need to do is use a/2.0
> -- see, you are in control this way, not the compiler

In either case you are in control. If Guido changes the default to be
float division you can get integer division by wrapping the result in a
floor(). The question is simply whether the default should be optimized
for experienced programmers coming from other languages like you or the
millions of people who are new to programming. You're likely to lose
that beauty contest.
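The floor() wrapping described above is a one-liner. A sketch in modern Python, where true division did in fact become the default (PEP 238, well after this thread; the `//` operator arrived in Python 2.2):

```python
import math

a, b = 7, 2

assert a / b == 3.5            # true division as the default
assert math.floor(a / b) == 3  # recovers integer division by wrapping
assert a // b == 3             # the dedicated floor-division operator
```

(Note that floor division and C-style truncation differ for negative operands, so the wrapper reproduces Python's own integer-division semantics, not C's.)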

Regardless, several people have made the point that in a dynamically
typed language people are much more likely to be bitten with this bug
because when they type "1" they aren't thinking about whether it is
interpreted as a float or an integer. The language goes out of its way
so that you often don't have to care (e.g. 5.0/2 == 2.5, not TypeError).
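A few lines illustrating that coercion (behavior unchanged in every Python version):

```python
# Python silently promotes an int operand to float in mixed arithmetic,
# so a literal like "1" rarely forces you to think about its type:
assert 5.0 / 2 == 2.5   # int promoted to float, no TypeError
assert 5 / 2.0 == 2.5   # same in the other order
assert 2 + 1.5 == 3.5   # coercion applies to all arithmetic, not just /
```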

> ...
> I submit, that if you ever find it desirable to have your
> compiler automatically convert your ints to floats at some
> time during a calculation, then you should have been dealing
> with floats exclusively from the start.  

You'll have to go way back in history to impose this point of view. C
happily coerces integers to floats in some circumstances. Python will
someday just choose a different set of circumstances.

> Do you mean you would prefer:
>   (int) OP (int) => (int) sometimes and other times (float)
> Or do you mean:
>   (int) OP (int) => (float) always?
> I think the first case would be confusing and the second
> limiting.

The first would not be confusing if you think of integers as a special
case of floats, as you are taught to in high school. To a student, the
output of this program is somewhat confusing:

#include <stdio.h>

int main(void)
{
    printf("%f\n", 5/2);    /* int result passed where %f expects a
                               double -- undefined behavior, hence the
                               surprising first line of output */
    printf("%f\n", 5.0/2);
    printf("%f\n", 5/2.0);
    return 0;
}

0.000000
2.500000
2.500000

Why are integers sometimes treated as a special case of float and
sometimes not?
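For what it's worth, the change Guido was considering is exactly what Python 3 later shipped (via PEP 238), giving the question a uniform answer:

```python
# In Python 3, / always means true (float) division and // always
# means floor division, regardless of operand types:
assert 5 / 2 == 2.5
assert 5.0 / 2 == 2.5     # same result either way
assert 5 // 2 == 2
assert 5.0 // 2 == 2.0    # // floors, but keeps the operands' type
```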

 Paul Prescod



