Why return None?

Alex Martelli aleaxit at yahoo.com
Fri Aug 27 12:06:31 EDT 2004


Antoon Pardon <apardon at forel.vub.ac.be> wrote:
   ...
> > Do I really need to point out that a.extend(b) doesn't work for tuples
> > and strings, while a+=b works as polymorphically as feasible on all
> > these types?
> 
> That extend doesn't work for strings and tupples is irrelevant.
> for those types that have an extend a.extend(b) is equivallent
> to a+=b 

It's perfectly relevant, because not all types have extend and += works
para-polymorphically anyway.
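
A quick sketch of the difference (Python 2.x, to match the era of this
thread; the values are just illustrative):

    for a, b in [([1], [2]), ((1,), (2,)), ("a", "b")]:
        a += b             # += concatenates for all three sequence types
        print a            # [1, 2], then (1, 2), then 'ab'

    [1].extend([2])        # fine: lists have extend
    # (1,).extend((2,))    # AttributeError: tuples have no 'extend'
    # "a".extend("b")      # AttributeError: strings have no 'extend'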

> In other words there is no reason to have an extend member for
> lists.

If lists were being designed from scratch today, there would be a design
decision involved: give them a nicely named normal method 'extend' that
is a synonym for __iadd__, so that the callable bound method can be
nicely extracted as 'mylist.extend' and passed around / stored somewhere
/ etc etc, or demand that people wanting to pass the callable bound
method around use 'mylist.__iadd__' which is somewhat goofied and less
readable.  I'm glad I do not have to make that decision myself, but if I
did I would probably tend to err on the side of minimalism -- no
synonyms.
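
To make the bound-method point concrete, here is a minimal sketch (the
names are invented for illustration):

    mylist = [1, 2]
    grow = mylist.extend       # nicely named bound method, easy to pass around
    # grow = mylist.__iadd__   # the goofier-looking synonym
    grow([3, 4])
    print mylist               # [1, 2, 3, 4]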

However, lists had extend before += existed, so clearly they kept it for
backwards compatibility.  Similarly, dicts keep their method has_key
even though it later became just a synonym for __contains__, etc etc.

If the point you're trying to make here is that Python chooses to be
constrained by backwards compatibility, keeping older approaches around
as new ones get introduced, I do not believe I ever heard anybody
arguing otherwise.  You may know that at some unspecified time in the
future a Python 3.0 version will be designed, unconstrained by strict
needs of backwards compatibility and specifically oriented to removing
aspects that have become redundant.  Guido has stated so repeatedly,
although he steadfastly refuses to name a precise date.  At that time
every aspect of redundancy will be carefully scrutinized, extend versus
__iadd__ and has_key versus __contains__ being just two examples; any
such redundancy that still remains in Python 3.0 will do so
by deliberate, specific decision, rather than due to the requirements of
backwards compatibility.

A "greenfield design", an entirely new language designed from scratch,
has no backwards compatibility constraints -- there do not exist
millions of lines of production code that must be preserved.  One can
also decide to develop a language without special regard to backwards
compatibility, routinely breaking any amount of working code, but that
would be more appropriate to a language meant for such purposes as
research and experimentation, rather than for writing applications in.


> furthermore a+=b doesn't work polymorphically feasable because
> the result of
> 
>   a=c
>   a+=b
> 
> is different depending on the type of c.

Indeed it's _para_-polymorphic, exactly as I said, not fully
polymorphic.  In many cases, as you code, you know you have no other
references bound to the object in question, and in that case the
polymorphism applies.
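
A minimal sketch of when the polymorphism does and does not apply:

    c = [1, 2]
    a = c
    a += [3]       # list: mutates the single shared object in place
    print c        # [1, 2, 3] -- c sees the change

    c = (1, 2)
    a = c
    a += (3,)      # tuple: rebinds a to a brand-new object
    print c        # (1, 2) -- c is untouched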

> 
> > It should be pretty obvious, I think.  So, if you want to
> > get an AttributeError exception when 'a' is a tuple or str, a.extend(b)
> > is clearly the way to go -- if you want para-polymorphic behavior in
> > those cases, a+=b.  Isn't it obvious, too?
> 
> No it isn't obvious. No design is so stupid that you can't find
> an example for it's use. That you have found a specific use
> here doesn't say anything.

I can silently suffer MOST spelling mistakes, but please, PLEASE do not
write "it's" where you mean "its", or viceversa: it's the most horrible
thing you can do to the poor, old, long-suffering English language, and
it makes me physically ill.  Particularly in a paragraph as content-free
as this one of yours that I've just quoted, where you're really saying
nothing at all, you could AT LEAST make an attempt to respect the rules
of English, if not of logic and common sense.

On to the substance: your assertion is absurd.  You say it isn't obvious
that a.extend(b) will raise an exception if a is bound to a str or
tuple, yet it patently IS obvious, given that str and tuple do not have
a method named 'extend'.  Whether that's stupid or clever is a
completely different issue, and one which doesn't make your "No it isn't
obvious" assertion any closer to sanity one way or another.


> > in-place mutation.  *IN GENERAL*, the effects of a+=b and a=a+b may
> > differ, though in specific cases ('a' being immutable, or of a mutable
> > type which strangely chooses to define __add__ but not __iadd__) they
> > may be identical.
> 
> which makes them para-polymorphic infeasable.

I don't know what "infeasable" means -- it's a word I cannot find in the
dictionary -- and presumably, if it means something like "unfeasible",
you do not know what the construct 'para-polymorphic' means (it means:
polymorphic under given environmental constraints -- I constructed it
from a common general use of the prefix 'para').


> > Like for a+b vs b+a: in general they may differ, but
> > they won't differ if the types involved just happen to have commutative
> > addition, of if a and b are equal or identical objects, i.e., in various
> > special cases.
> >
> > "You get different effects *in general*" does not rule out that there
> > may be special cases (immutable types for one issue,
> 
> If those specific cases can be half of the total cases I wouldn't
> call the not specific cases *in general*.

There is no sensible way to specify how to count "cases" of types that
have or don't have commutative addition -- anybody can code their own
types and have their addition operation behave either way.  Therefore,
it makes no sense to speak of "half the total cases".
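
For instance, any class can define + however it pleases (a toy sketch;
the class is invented here):

    class Chunk(object):
        # addition defined as concatenation: deliberately non-commutative
        def __init__(self, s):
            self.s = s
        def __add__(self, other):
            return Chunk(self.s + other.s)

    print (Chunk("ab") + Chunk("cd")).s   # 'abcd'
    print (Chunk("cd") + Chunk("ab")).s   # 'cdab' -- order matters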

Still, the English expression "in general" is ambiguous, as it may be
used to mean either "in the general case" (that's how it's normally used
in mathematical discourse in English, for example) or "in most cases"
(which is how you appear to think it should exclusively be used).


> The point is that there is a difference between what is obvious in
> general and what is obvious within a certain tradition. If python

Absolutely true: the fact that a cross like + stands for addition is
only obvious for people coming from cultures in which that symbol has
been used to signify addition for centuries, for example.  There is
nothing intrinsic in the graphics of the glyph '+' that makes it
'obviously' mean 'addition'.

> would limit itself to only one obvious way for those things that
> are obvious in general that would be one way.

I cannot think of _any_ aspect of a programming language that might
pertain to 'things that are obvious in general' as opposed to culturally
determined traits -- going left to right, using ASCII rather than other
alphabets, using '+' to indicate addition, and so on, and so forth.
Please give examples of these 'things that are obvious in general' where
you think Python might 'limit itself to only one obvious way'.

> But here you are defending one thing that is only obvious through
> tradition, by pointing out that something that hasn't had the time
> to become a tradition isn't obvious.

When there is one operator to do one job, _in the context of that set of
operators_, it IS obviously right to use that operator, rather than using
two operators which, combined, give the same effect.  I claim that, no
matter what symbols you use to represent the operators, "TO a, ADD b" is
'more obvious' than "FROM a, SUBTRACT the NEGATIVE of b", because the
former requires one operator, binary ADD, the latter requires two,
binary SUBTRACT and unary NEGATIVE.  I do not claim that this is
necessarily so in any culture or tradition whatsoever: I do claim it is
true for cultures sufficiently influenced by Occam's Razor, "Entities
are not to be multiplied beyond necessity", and that the culture to
which Python addresses itself is in fact so influenced.  ((If you feel
unable to relate to a culture influenced by Occam's Razor, then it is
quite possible that Python is in fact not suitable for you)).


> Personly I don't find the use of "+" as a concat operator obvious.
> There are types for which both addition and concatenation can be
> a usefull operator. Using the same symbol for both will make
> it just that much harder to implement such types and as a result
> there is no obvious interface for such types.

True, if I were designing a language from scratch I would surely
consider the possibility of using different operators for addition and
concatenation, and, similarly, for multiplication and repetition --
there are clearly difficult trade-offs here.  On one side, in the future
I may want to have a type that has both addition and concatenation
(presumably a numeric array); on the other, if concatenation is a
frequent need in the typical use cases of the language it's hard to
think of a neater way to express it than '+', in this culture (where, for
example, PL/I's concatenation operator '||' has been appropriated by C
to mean something completely different, 'logical or' -- now, using '||'
for concatenation would be very confusing to a target audience that is
more familiar with C than with PL/I or SQL...).  Any design choice in
the presence of such opposite constraints can hardly be 'obvious' (and
in designing a language from scratch there is an enormously high number
of such choices to be made -- not independently from each other,
either).

But note that the fact that choosing to use the same operator was not an
_obvious_ choice at the time the language was designed has nothing to do
with the application of the Zen of Python point to 'how to concatenate
two strings'.  Python _is_ designed in such a way that the task "how do
I concatenate the strings denoted by names a and b" has one obvious
answer: a+b.  This is because of how Python is designed (with + between
sequences meaning concatenation) and already-mentioned cultural aspects
(using a single operator that does job X is the obvious way in a culture
influenced by Occam's Razor to do job X).  All alternatives require
multiple operations ('JOIN the LIST FORMED BY ITEMS a and b' -- you have
to form an intermediate list, or tuple, and then join it, for example)
and therefore are not obvious under these conditions.  This is even
sometimes unfortunate, since
   for piece in makepieces(): bigstring += piece
is such a performance disaster (less so in Python 2.4, happily!), yet
people keep committing it because it IS an "attractive nuisance" -- an
OBVIOUS solution that is not the RIGHT solution.  That it's obvious to
most beginners is proven by the fact that so many beginners continue to
do it, even though ''.join(makepieces()) is shorter and faster.  I once
hoped that sum(makepieces()) could field this issue, but Guido decided
that having an alternative to ''.join would be bad and had me take the
code that handled string arguments out of 'sum'.  Note that I do not
_whine_ about it, even though it meant giving up both one of my pet
ideas _and_ some work I had already done, rather I admit it's his
call... and I use his language rather than making my own because over
the years I've learned that _overall_ his decisions make a better
language than mine would, even though I may hotly differ with him
regarding a few specific decisions out of the huge numbers needed to
build and grow a programming language.
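
A sketch of the contrast (makepieces here is just a stand-in for any
producer of many small strings):

    def makepieces():
        return ["piece%d" % i for i in xrange(1000)]

    # the "attractive nuisance": each += may copy the whole accumulated
    # string, giving quadratic behavior (mitigated in Python 2.4)
    bigstring = ""
    for piece in makepieces():
        bigstring += piece

    # the recommended spelling: one pass, one final allocation
    bigstring = "".join(makepieces())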

If I didn't think that, I wouldn't use Python, of course: besides the
possibility of making my own languages, there are many truly excellent
very high level languages to choose among -- Lisp, Smalltalk, Haskell,
ML of many stripes, Erlang, Ruby.  I think I could be pretty happy with
any of these... just not quite as happy as I am with Python, therefore
it is with Python that I stick!


> >> If someway of doing it
> >> is usefull then I think it should be included and the fact that
> >> it introduces more than one obvious way to do some things shouldn't
> >> count for much.
> >
> > This is exactly Perl's philosophy, of course.
> 
> No it isn't. Perl offers you choice in a number of situations
> where a number of the alternatives don't offer you anything usefull.
> unless a way to do things differently and eliminate a few characters.

And for some people eliminating some characters is very important and
makes those alternatives preferable and useful to them, according to
their criteria.


> >> Sure you shouldn't go the perl-way where things seemed to have
> >> been introduced just for the sake of having more than obvious way
> >> to do things. But eliminating possibilities (method chaining)
> >> just because you don't like them and because they would create
> >> more than one obvious way to do things, seems just as bad to
> >> me.
> >
> > If a language should not eliminate possibilities because its designer
> > does not like those possibilities, indeed if it's BAD for a language
> > designer to omit from his language the possibilities he dislikes, what
> > else should a language designer do then, except include every
> > possibility that somebody somewhere MIGHT like?
> 
> So if you think it is good for a language designer to omit what
> he dislikes. Do you think it is equally good for a language
> designer to add just because he likes it. And if you think so,
> do you think the earlier versions of perl, where we can think
> the language was still mainly driven by what Larry Wall liked,
> was a good language.

Do you know how to use the question mark punctuation character?  It's
hard to say whether you're asking questions or making assertions, when
your word order suggests one thing and your punctuation says otherwise.

"You know a design is finished, not when there is nothing left to add,
but when there is nothing left to take away" (Antoine de Saint Exupery,
widely quoted and differently translated from French).  There is no
necessary symmetry between adding features and avoiding them.

But sure, it's a designer's job to add what he likes and thinks
necessary and omit what he dislikes and thinks redundant or worse.  I
met Perl when Perl was at release 3.something, and by that time it was
already florid with redundancy -- I believe it was designed that way
from the start, with "&foo if $blah;" and "if($blah) {&foo;}" both
included because some people would like one and others would like the
other, 'unless' as a synonym of 'if not' for similar reasons, etc, etc,
with a design principle based on the enormous redundancy of natural
language (Wall's field of study).  ((However, I have no experience with
the very first few releases of Perl)).  At the time when I met Perl 3 I
thought it was the best language for my needs under Unix given the
alternatives I believed I had (sh and its descendants, awk -- Rexx was
not available for Unix then, Python I'd never heard of, Lisp would have
cost me money, etc, etc), which is why I used it for years (all the way
to Perl 4 and the dawn of Perl 5...) -- but, no, I never particularly
liked its florid redundancy, its lack of good data structures (at the
time, I do understand the current Perl is a bit better there!), and the
need for stropping just about every identifier.  Why do you ask?  I do
not see the connection between my opinion of Perl and anything else we
were discussing.

> 
> I can understand that a designer has to make choices, but
> if the designer can allow a choice and has no other arguments
> to limit that choice than that he doesn't like one alternative
> then that is IMO a bad design decision.

Ah, you're making a common but deep mistake here: the ability to DO
something great, and the ability to explain WHY one has acted in one way
or another in the process of doing that something, are not connected.

Consider a musician composing a song: the musician's ability to choose a
sequence of notes that when played will sound just wonderful is one
thing, his ability to explain WHY he's put a Re there instead of a Mi is
quite another issue.  Would you say "if a musician could have used a
note and has no other arguments to omit that note than that he doesn't
like it then that is a bad music composition decision"?  I think it's
absurd to infer, from somebody's inability to explain a decision to your
satisfaction, _or at all_, that the decision is bad.

"Those who can, do, those who can't, explain" may come closer (except
that there _are_ a few musicians, language designers, architects, and
other creative types, who happen to be good at both doing and
explaining, but they're a minority, I believe).

I've never made any claim about Guido's skill as an explainer or
debater, please note.  I do implicitly claim he's great at language
design, by freely choosing to use the language he's designed when there
are so many others I could just as freely choose among.  (_Your_ use of
Python, on the other hand, is obviously totally contradictory with your
opinion, which you just expressed, that it's a horribly badly designed
language, since its designer is not good at arguing about each and
every one of the uncountable decisions he's made -- to disallow
possibility a, possibility b, possibility c, and so on, and so forth).


> >   target = mydict[foo].bar[23].zepp
> >   target.pop(xu1)
> >   target.sort()
> >   target.pop(xu3)
> >   target.reverse()
> >   target.pop(xu7)
> 
> I find this a questionable practice. What if you need to make the list
> empty at some time. The most obvious way to do so after a number of
> such statements would be:
> 
>    target = []
> 
> But of course that won't work.

That would be 'obvious' only to someone so totally ignorant of Python's
most fundamental aspects that I _cringe_ to think of that someone using
Python.  By just asserting it would be obvious, you raise serious
doubts about your competence in Python use.

Assigning to a bare name NEVER mutates the object to which that name
previously referred, if any.  NEVER.

Therefore, thinking of assigning to a bare name as a way of mutating an
object is not obvious -- on the contrary, it's absurd, in Python.

One obvious way is:

   target[:] = []

"assigning to the CONTENTS of the object" does mutate it, and this works
just fine, of course.  Unfortunately there is another way, also obvious:

  del target[:]

"deleting the CONTENTS of the object".  This will also work just fine.
Alas, it's only _preferable_ that the obvious way be just one, and we
cannot always reach the results we would prefer.
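
A side-by-side sketch of the three spellings:

    data = [1, 2, 3]
    target = data
    target = []        # rebinds the name only
    print data         # [1, 2, 3] -- the list object is untouched

    target = data
    target[:] = []     # slice assignment mutates the object
    print data         # [] -- emptied in place

    data = [1, 2, 3]
    target = data
    del target[:]      # deleting the contents also mutates
    print data         # []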

So, your assertion that this is a questionable practice proves
untenable.  But then, as this thread shows, _most_ of your assertions
are untenable, so you're clearly comfortable with the fact.   I guess it
goes well with freely choosing to use a language which you consider so
badly designed!


> If a language goes so far as to make a particular coding impossible
> while that would have been the prefered coding style for most of
> the project members then such a limitation can hinder the decision
> to agree upon a certain style instead of helping.

And in this case the team should definitely choose another language --
just as you should, instead of wasting your time using Python, and
wasting both yours AND ours whining against it.


> I also think this attitude is appalling. Python is for consenting
> adults I hear. But that doesn't seem to apply here, as python
> seems to want to enforce a certain coding style instead of
> letting consenting adults work it out among themselves.

Python most definitely does not multiply entities beyond necessity in
order to allow multiple equivalent coding styles -- it's that Occam
Razor thing again, see.  If a team wants enormous freedom of design,
short of designing their own language from scratch, they can choose
among Lisp, Scheme, Dylan -- all good languages with enormously powerful
MACRO systems which let you go wild in ways languages without macros
just can't match.  Of course, it's likely that nobody besides the
original team can maintain their code later -- that's the flip side of
that freedom... it can go all the way to designing your own language,
and who else but you will know it so they can advise, consult, maintain,
and so on, once you choose to avail yourself of that freedom?

Python values uniformity -- values the ability of somebody "from outside
the team" to read the code, advise and consult about it, and maintain it
later, higher than it values the possibility of giving the team *yet
another way* to design their own language... why would you NEED another
Lisp?  There IS one, go and use THAT (or if you can't stand parentheses,
use Dylan -- not far from Lisp with different surface syntax after all).

I also appreciate this uniformity highly -- it lets me advise and
consult all manner of teams using Python, it makes my books and courses
and presentations more useful to them, it lets me turn for advice and
consultancy to the general community regarding my own projects and
teams, all without difficulty.  What could possibly be "appalling" in
not wanting to be yet another Lisp, yet another Perl, and so on?!  Why
shouldn't there be on this Earth ONE language which makes software
maintenance easier, ONE language which cares more about the ease of
reading others' code than about the ease of writing that code?!  Now
THAT absolutism, this absurd attitude of yours that wants to wipe out
from the face of the Earth the ONLY language so focused on uniformity,
egoless and ownerless code, community, maintenance, etc, to turn it into
yet another needless would-be clone of Lisp, Perl, etc... *THAT* is
truly appalling indeed!


> > Great, so, I repeat: go away and design your language, one that WILL
> > impress you with its design.  Here, you're just waiting your precious
> > time and energy, as well of course as ours.
> 
> That you waste yours, is entirly your choice, nobody forces your hand
> to reply to me.

Absolutely my choice, of course.  But I have a reasonable motivation,
which I have already explained: there may be other readers who would
be ill-served by leaving your untenable assertions, etc etc,
unchallenged, when those assertions &c are so easy to tear into small
bloody pieces and deserve nothing better.

YOUR motivation for using a language you consider badly designed, one
whose underlying culture you find APPALLING (!your choice of word!), and
then spending your time spewing venom against it, is, on the other hand,
totally mysterious.  


> > Practicality beats purity: needing to polymorphically concatenate two
> > sequences of any kind, without caring if one gets modified or not, is a
> > reasonably frequent need and is quite well satisfied by += for example.
> 
> It isn't. Either you know what types the variables are and then
> using a different operator depending on the type is no big deal,
> or you don't know what type the variables are and then not caring
> if one gets modified or not, is a disaster in waiting.

Your assertion is, once again, totally false and untenable.

def frooble(target, how_many, generator, *args, **kwds):
    # Concatenate how_many generated sequences onto target: if target
    # is mutable (e.g. a list), += extends it in place; if immutable
    # (e.g. a tuple or str), += rebinds the local name to a new object.
    for i in xrange(how_many):
        target += generator(i, *args, **kwds)
    return target

Where is the "disaster in waiting" here?  The specifications of
'frooble' are analogous to those of '+=': if you pass it a first
argument that is mutable it will extend it, otherwise it obviously
won't.
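
For instance (gen here is an invented example generator function):

    def gen(i):
        return [i]

    stuff = [0]
    print frooble(stuff, 3, gen)    # [0, 0, 1, 2]
    print stuff                     # [0, 0, 1, 2] -- extended in place

    word = "a"
    print frooble(word, 3, str)     # 'a012' -- a new str was built
    print word                      # 'a' -- the original was never mutated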


Alex


