Why return None?

Antoon Pardon apardon at forel.vub.ac.be
Fri Aug 27 09:08:02 EDT 2004


On 2004-08-27, Alex Martelli wrote <aleaxit at yahoo.com>:
> Antoon Pardon <apardon at forel.vub.ac.be> wrote:
>    ...
>> > Yes you can, and in the general case get very different effects, e.g.:
>> 
>> And what about
>> 
>>   a += b    vs   a.extend(b)
>
> I can go on repeating "in the general case [these constructs] get very
> different effects" just as long as you can keep proposing, as if they
> might be equivalent, constructs that just aren't so in the general case.
>
> Do I really need to point out that a.extend(b) doesn't work for tuples
> and strings, while a+=b works as polymorphically as feasible on all
> these types?

That extend doesn't work for strings and tuples is irrelevant.
For those types that do have an extend method, a.extend(b) is
equivalent to a += b.

In other words, there is no reason to have an extend member for
lists.
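
For lists the two really are equivalent in effect (a quick sketch):

  a = [0, 1, 2]
  b = [3, 4]
  a.extend(b)   # mutates a in place, exactly as a += b would
  print a       # [0, 1, 2, 3, 4]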

Furthermore, a += b doesn't work polymorphically in any feasible
way, because the result of

  a = c
  a += b

is different depending on the type of c.
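
A minimal sketch of that difference (the values are only
illustrative):

  # With a list (mutable), += mutates in place, so c sees the change:
  c = [0, 1, 2]
  a = c
  a += [3, 4]
  print c     # [0, 1, 2, 3, 4] -- c was modified through the alias

  # With a tuple (immutable), += rebinds a to a new object:
  c = (0, 1, 2)
  a = c
  a += (3, 4)
  print c     # (0, 1, 2) -- c is untouched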

> It should be pretty obvious, I think.  So, if you want to
> get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
> is clearly the way to go -- if you want para-polymorphic behavior in
> those cases, a+=b.  Isn't it obvious, too?

No it isn't obvious. No design is so stupid that you can't find
an example for its use. That you have found one specific use
here doesn't prove anything.

>> >>>> c=a=range(3)
>> >>>> b=range(2)
>> >>>> a+=b
>> >>>> c
>> > [0, 1, 2, 0, 1]
>> >
>> > versus:
>> >
>> >>>> c=a=range(3)
>> >>>> b=range(2)
>> >>>> a=a+b
>> >>>> c
>> > [0, 1, 2]
>> 
>> I wouldn't say you get different effects in *general*. You get the
>> same effect if you use numbers or tuples or any other immutable
>> object.
>
> a+=b is defined to be: identical to a=a+b for immutable objects being
> bound to name 'a'; but not necessarily so for mutable objects -- mutable
> types get a chance  to define __iadd__ and gain efficiency through
> in-place mutation for a+=b, while the semantics of a=a+b strictly forbid
> in-place mutation.  *IN GENERAL*, the effects of a+=b and a=a+b may
> differ, though in specific cases ('a' being immutable, or of a mutable
> type which strangely chooses to define __add__ but not __iadd__) they
> may be identical.

which makes them para-polymorphic infeasable.

> Like for a+b vs b+a: in general they may differ, but
> they won't differ if the types involved just happen to have commutative
> addition, or if a and b are equal or identical objects, i.e., in various
> special cases.
>
> "You get different effects *in general*" does not rule out that there
> may be special cases (immutable types for one issue,

If those special cases can make up half of all cases, I wouldn't
call the remaining cases *the general case*.

> commutative-addition types for another, etc, etc) in which the effects
> do not differ.  Indeed, if it was "always" true that you got different
> effects, it would be superfluous to add that "in general" qualifier.
> Therefore, I find your assertion that you "wouldn't say you get
> different effects in *general*" based on finding special cases in which
> the effects do not differ to be absurd and unsupportable.
>
>
>> > But let's be sensible: if 'it' is joining two strings which are bound to
>> > names b and c, b+c is the only OBVIOUS way to do it.  Building a
>> > sequence whose items are b and c and calling ''.join on it is clearly an
>> > indirect and roundabout -- therefore NOT "the one obvious way"! -- to
>> > achieve a result.  Proof: it's so unobvious, unusual, rarely used if
>> > ever, that you typed entirely wrong code for the purpose...
>> 
>> That is just tradition. Suppose the "+" operator had never worked
>> on strings and concatenating had from the start been done by joining;
>> then that would have been the one obvious way to do it.
>
> In a hypothetical language without any + operator, but with both unary
> and binary - operators, the one "obvious" way to add two numbers a and b
> might indeed be to code:  a - (-b).  So what?  In a language WITH a
> normal binary + operator, 'a - (-b)' is nothing like 'an obvious way'.

The point is that there is a difference between what is obvious in
general and what is obvious within a certain tradition. If Python
limited itself to only one obvious way for those things that are
obvious in general, that would be one thing.

But here you are defending one thing that is only obvious through
tradition, by pointing out that something that hasn't had the time
to become a tradition isn't obvious.

Personally I don't find the use of "+" as a concatenation operator
obvious. There are types for which both addition and concatenation
can be useful operations. Using the same symbol for both makes such
types just that much harder to implement, and as a result there is
no obvious interface for them.
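
As a purely hypothetical sketch (the class and method names are made
up for illustration): a sequence-like numeric type where element-wise
addition and concatenation would both be useful, so that "+" can only
spell one of them and the other ends up under an arbitrary name:

  class NumSeq:
      def __init__(self, items):
          self.items = list(items)
      def __add__(self, other):
          # element-wise addition -- one reasonable meaning of "+"
          return NumSeq([x + y for x, y in zip(self.items, other.items)])
      def concat(self, other):
          # concatenation has to live under some other, arbitrary name
          return NumSeq(self.items + other.items)

  a = NumSeq([1, 2, 3])
  b = NumSeq([10, 20, 30])
  print (a + b).items      # [11, 22, 33]
  print a.concat(b).items  # [1, 2, 3, 10, 20, 30]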


>> > We can't always get even that, as a+b vs
>> > b+a show when a and b are bound to numbers, but we can sure get closer
>> > to it by respecting most of GvR's design decisions than by offering
>> > unfounded, hasty and badly reasoning critiques of them.
>> 
>> I think that this goal of GvR is a bad one. 
>
> I'm sure you're a better language designer than GvR, since you're
> qualified to critique, not just a specific design decision, but one of
> the pillars on which he based many of the design decisions that together
> made Python.


> Therefore, I earnestly urge you to stop wasting your time critiquing an
> inferiorly-designed language and go off and design your own, which will
> no doubt be immensely superior.  Good bye; don't slam the door on the
> way out, please.
>
>> If some way of doing it
>> is useful then I think it should be included, and the fact that
>> it introduces more than one obvious way to do some things shouldn't
>> count for much.
>
> This is exactly Perl's philosophy, of course.

No it isn't. Perl offers you choice in a number of situations
where the alternatives don't offer you anything useful, except
a way to do things differently and shave off a few characters.

>> Sure you shouldn't go the perl-way where things seemed to have
>> been introduced just for the sake of having more than one obvious way
>> to do things. But eliminating possibilities (method chaining)
>> just because you don't like them and because they would create
>> more than one obvious way to do things, seems just as bad to
>> me.
>
> If a language should not eliminate possibilities because its designer
> does not like those possibilities, indeed if it's BAD for a language
> designer to omit from his language the possibilities he dislikes, what
> else should a language designer do then, except include every
> possibility that somebody somewhere MIGHT like?

So if you think it is good for a language designer to omit what he
dislikes, do you think it is equally good for a language designer to
add something just because he likes it? And if you think so, do you
think the earlier versions of Perl, when we can assume the language
was still driven mainly by what Larry Wall liked, were a good
language?

I can understand that a designer has to make choices, but if the
designer can allow a choice and has no argument for limiting that
choice other than that he dislikes one alternative, then that is
IMO a bad design decision.

>> What I have heard about the decorators is that one of the
>> arguments in favor of decorators is that you have to
>> give the name of the function only once, where traditionally
>> you have to repeat the function name and this can introduce
>> errors.
>> 
>> But the same argument goes for allowing method chaining.
>> Without method chaining you have to repeat the name of
>> the object which can introduce errors. 
>
> I've heard that argument in favour of augmented assignment operators
> such as += -- and there it makes sense, since the item you're operating
> on has unbounded complexity... mydict[foo].bar[23].zepp += 1 may indeed
> be better than repeating that horrid LHS (although "Demeter's Law"
> suggests that such multi-dotted usage is a bad idea in itself, one
> doesn't always structure code with proper assignment of responsibilities
> to objects and so forth...).
>
> For a plain name, particularly one which is just a local variable and
> therefore you can choose to be as simple as you wish, the argument makes
> no sense to me.  If I need to call several operations on an object I'm
> quite likely to give that object a 'temporary alias' in a local name
> anyway, of course:
>   target = mydict[foo].bar[23].zepp
>   target.pop(xu1)
>   target.sort()
>   target.pop(xu3)
>   target.reverse()
>   target.pop(xu7)

I find this a questionable practice. What if you need to make the
list empty at some point? The most obvious way to do so after a
number of such statements would be:

   target = []

But of course that won't work.
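
To spell the pitfall out (a minimal sketch, with a plain dict
standing in for the nested structure above):

  mydict = {'foo': [1, 2, 3]}
  target = mydict['foo']

  target = []            # only rebinds the local name
  print mydict['foo']    # still [1, 2, 3]

  target = mydict['foo']
  del target[:]          # mutates the list in place, emptying it
  print mydict['foo']    # []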

>> >> The difference between
>> >> 
>> >>   print somelist.sort()
>> >> 
>> >> and
>> >> 
>> >>   somelist.sort()
>> >>   print somelist
>> >> 
>> >> 
>> >> is IMO of the same order as the difference between
>> >> 
>> >> 
>> >>   print a + b
>> >> 
>> >> and
>> >> 
>> >>   r = a + b
>> >>   print r
>> >
>> > For a sufficiently gross-grained comparison, sure.  And so?  In the
>> > second case, if you're not interested in having the value of a+b kept
>> > around for any subsequent use, then the first approach is the one
>> > obvious way;
>> 
>> No it isn't, because programs evolve. You may think you don't
>> need the result later on, but that may change, so writing it
>> the second way will make changes easier later on.
>
> Ridiculous.  Keep around a+b, which for all we know here might be a
> million-items list!, by having a name bound to it, without ANY current
> need for that object, because some FUTURE version of your program may
> have different specs?!
> If specs change, refactoring the program written in the sensible way,
> the way that doesn't keep memory occupied to no good purpose, won't  be
> any harder than refactoring the program that wastes megabytes by always
> keeping all intermediate results around "just in case".

One could argue that this is again just a deficiency of Python's
implementation, which can't optimise the code so that unused
variables have their memory released.
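
For what it's worth, with CPython's reference counting one can get
the memory released by dropping the name explicitly (a sketch):

  a = range(3)
  b = range(2)
  r = a + b
  print r    # [0, 1, 2, 0, 1]
  del r      # unbind the name; the list can be reclaimed at once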

>> > if you ARE, the second, because you've bound a name to it
>> > (which you might have avoided) so you can reuse it (if you have no
>> > interest in such reuse, it's not obvious why you've bound any name...).
>> >
>> > In the first case, fortunately the first approach is illegal, the second
>> > one is just fine.  Were they exactly equivalent in effect neither would
>> > be the one obvious way for all reasonable observers -- some would hate
>> > the side effect in the first case, some would hate the idea of having
>> > two statements where one might suffice in the second case.
>> 
>> So? I sometimes get the idea that people here can't cope with
>> differences in how people code. So any effort must be made
>> to force people to code in one specific way.
>
> When more than one person cooperates in writing a program, the group
> will work much better if there is no "code ownership" -- the lack of
> individualized, quirky style variations helps a lot.  It's not impossible
> to 'cope with differences' in coding style within a team, but it's just
> one more roadblock erected to no good purpose.  A language can help the
> team reach reasonably uniform coding style (by trying to avoid offering
> gratuitous variation which serves no real purpose), or it can hinder the
> team in that same goal (by showering gratuitous variation on them).

If a language goes so far as to make a particular coding style
impossible when that would have been the preferred style of most
of the project members, then such a limitation can hinder, rather
than help, agreement on a certain style.

I also find this attitude appalling. Python is for consenting
adults, I hear. But that doesn't seem to apply here, as Python
seems to want to enforce a certain coding style instead of
letting consenting adults work it out among themselves.


>> > Fortunately the first approach does NOT do the same thing as the second
>> > (it prints out None:-) so Python sticks to its design principles.  Let
>> > me offer a private libation to whatever deities protect programmers,
>> > that Python was designed by GvR rather than by people able to propose
>> > analogies such as this last one without following through on all of
>> > their implications and seeing why this SHOWS Python is consistent in
>> > applying its own design principles!
>> 
>> That these implications are important is just an implication of the
>> design principles. If someone doesn't think particular design principles
>> are that important, he doesn't care that if something is changed that
>> particular design principle will be violated. Personally I'm not
>> that impressed with the design of python, it is a very useful language
>
> Great, so, I repeat: go away and design your language, one that WILL
> impress you with its design.  Here, you're just wasting your precious
> time and energy, as well of course as ours.

That you waste yours is entirely your choice; nobody forces your
hand to reply to me.

>> but having operators like '+=' which have a different kind of result
>> depending on whether you have a mutable or immutable object is IMO
>> not such a good design and I wonder what design principle inspired
>> them.
>
> Practicality beats purity: needing to polymorphically concatenate two
> sequences of any kind, without caring if one gets modified or not, is a
> reasonably frequent need and is quite well satisfied by += for example.

It isn't. Either you know what types the variables are, and then
using a different operator depending on the type is no big deal,
or you don't know what type the variables are, and then not caring
whether one gets modified is a disaster waiting to happen.
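
A small sketch of that waiting disaster (the function and variable
names are made up for illustration):

  def join_seqs(a, b):
      # 'polymorphic' concatenation via +=
      a += b
      return a

  data = [1, 2, 3]
  print join_seqs(data, [4, 5])  # [1, 2, 3, 4, 5]
  print data                     # [1, 2, 3, 4, 5] -- caller's list changed!

  text = 'abc'
  print join_seqs(text, 'de')    # 'abcde'
  print text                     # 'abc' -- the string was left alone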

-- 
Antoon Pardon


