Perl is worse!

Grant Edwards ge at nowhere.none
Fri Jul 28 10:27:58 EDT 2000


In article <slrn8o2ft4.1e2.grey at teleute.rpglink.com>, Steve Lamb wrote:

>>Steve, in case it isn't clear yet, Python programmers *want* to
>>be blown out of the water when doing something as senseless as
>
>>    1 + "foo"
>
> It isn't senseless.  That is the whole point.  It is only
> senseless because of typing.  Clearly you cannot add a word to
> a number, granted.  But what of 1 + "1"?  That isn't senseless,
> those are two numbers.  I can see they are two numbers, it is
> only because of typing that it fails.

It is only because of typing that all programs fail.

If I typed "1", that means I wanted a word, a printable string,
and _not_ an integer.  If I wanted an integer I would type 1
instead of "1".  If I want to convert a string to an integer or
an integer to a float or a float to a string, then _I_ will do
it. I do _not_ want the language to make WAGs about what I
meant when I typed something.

I meant what I typed.  If not, I'll go back and fix it until
what I typed is what I meant.
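
To make that concrete, here's roughly how it plays out at the
interactive prompt (the exact wording of the traceback varies
between Python versions):

    >>> 1 + "1"
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    TypeError: unsupported operand type(s) for +: 'int' and 'str'
    >>> 1 + int("1")     # I say what I mean: convert, then add
    2
    >>> str(1) + "1"     # or go the other way and concatenate
    '11'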

> So I ask you this /VERY/ simple question.  Why can't Python do
> both?  Hm? What is wrong with taking 1 + "1", converting the
> "1" to a 1 and adding it together?

There's nothing "wrong" with it.  I just don't want it to
happen. The people responsible for designing the language
apparently didn't want it to happen.  Choose or invent a
language that matches what you want.  Nobody (sane) claims that
a particular language is the best one for everybody or for
every application.

>If it is a string that cannot be converted to an integer, throw
>an exception, otherwise, do it.  Then that preserves the 1 +
>"foo" exception and also does the sane thing of getting rid of
>types when it makes sense to do so.

I don't think it makes sense to do so.  If I explicitly denote
an object as a string (e.g. "1"), then I want it to be a string
and I want it to _stay_ a string.

Maybe I'm just getting old, but there are already enough things
changing in the world.  I don't need my data changing types
behind my back.
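
And if somebody really wants that coercion, nothing stops him
from asking for it explicitly instead of baking it into the
language.  A sketch of a hypothetical helper (add_coercing is
my own name, not anything in the library):

    >>> def add_coercing(a, b):
    ...     if type(b) == type(""):
    ...         b = int(b)    # ValueError here if b is "foo"
    ...     return a + b
    ...
    >>> add_coercing(1, "1")
    2

add_coercing(1, "foo") still blows up with a ValueError, which
is exactly what was proposed -- but now the conversion is
visible in the code, where the next reader can see it.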

>Hell, why do an exception at all?  Why not do what is already
>done with integers, reals and floats?

[...]

>    You already accept:
>
>1 + 1 = integer
>
>1 + 1.2 = float

I may accept it in the sense that I use languages that do it,
but I don't like it.  I think that should raise an exception
too.  I've had programs fail because of this.  It is a Bad
Thing(tm).  This is my opinion/preference, and it is based on
the way I think, on the kinds of applications I write, and on
my experience debugging them.
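
Here's a representative sketch of the kind of failure I mean
(write 2L**53 on an interpreter whose plain ints are 32 bits; a
double only has 53 bits of mantissa, and output formatting
varies by version):

    >>> n = 2**53 + 1    # exactly representable as an integer
    >>> n + 0.0          # one float operand: silent promotion
    9007199254740992.0

The low bit is gone, and nothing warned me about it.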

>1 + 1j = complex

I'm a bit more ambivalent on that one.  From what I remember
from my undergrad circuit analysis classes (many years ago),
calculations mixing real and complex numbers are extremely
common.  However (and here's the important part), the behavior
of such calculations is completely defined, and there is no
potential for loss of information when promoting a non-complex
floating point value to a complex floating point value.
Converting 1.2345 into 1.2345 + 0.0j does not lose information.
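
For instance (using 1.5 so the printed value stays tidy):

    >>> z = 1.5 + 0j     # real 1.5 promoted to complex
    >>> z.real           # the original value, intact
    1.5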

Converting values from int to float or float to int can lose
information and should _not_ be done automatically by a
language.
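
Both directions are easy to demonstrate (again, write 10L**17
for a long integer on an old interpreter):

    >>> int(2.99)            # float -> int silently truncates
    2
    >>> float(10**17 + 1)    # int -> float: the +1 rounds away
    1e+17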

> So please don't tell me you don't want "automagic" type
> changing when it is already there and, I'd wager, you use it
> extensively.

It doesn't matter that it is already there: I don't want
"automagic" type changing, and I try not to use it.  When
writing in C, I generally use lint to make _sure_ I don't
use it.

-- 
Grant Edwards                   grante             Yow!  Of course, you
                                  at               UNDERSTAND about the PLAIDS
                               visi.com            in the SPIN CYCLE --


