total idiot question: +=, .=, etc...

Reimer Behrends behrends at cse.msu.edu
Tue Jun 29 00:33:38 EDT 1999


Tim Peters (tim_one at email.msn.com) wrote:
[...]
> Let me ammend that, to "any OO language with declarations <that isn't
> Eiffel> has to ..." <wink>.  Eiffel allows inherited features to be accessed
> under local alias names as if the original name never existed (its "rename"
> clauses), so it has *a* way to recover locally from trivial superclass name
> changes.  Java (& most other languages) take a different approach.

These are two different things, actually. If you forbid shadowing of
instance variables by local variables, then simply renaming the local
variable in the subclass can fix the problem. This is applicable to
any language.

The related problem of adding to a superclass a new method or attribute
that already exists in a subclass is most easily solved by a renaming
mechanism, but it is also amenable to a minor rewrite of the subclass.
I am assuming, of course, a system in which the compiler notifies you
when you accidentally redefine a method.
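
To make the scenario concrete, here is a minimal Python sketch; the
class and method names are invented purely for illustration:

	# Version 1 of a library:
	class Connection:
		def send(self, data):
			pass	# ... write the data to the wire ...

	# A subclass adds an unrelated helper of its own:
	class LoggedConnection(Connection):
		def reset(self):	# here it means "clear the local log"
			self.log = []

	# If version 2 of the library later adds a Connection.reset() of
	# its own ("drop the socket and reconnect") and starts calling
	# self.reset() internally, LoggedConnection silently overrides it.
	# The purely local repair is to rename the subclass method (say,
	# reset to clear_log) and update the subclass's own callers; that
	# is the "minor rewrite" meant above.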

> > Yes, changing stuff in a superclass can thus break code in a subclass; but
> > this can always happen (like if you add a new method that also exists in
> > a subclass).  Doing a major internal change to a superclass requires at
> > least rechecking subclasses, anyway.
> 
> I did say "trivial".

Adding methods or instance variables to a superclass is never "trivial";
it always has the potential to break existing subclasses.

> A purely local way of addressing this is vital:  the
> subclass author may not be able to change the superclass code, or may not
> even have read access to its source code; in the other direction, a
> superclass author may have no idea who subclasses from them.

Yes. This is why changes to a superclass are always critical. A
well-designed subclass, however, should always be able to recover
simply by renaming the conflicting entities; the price is that the
subclass itself has to change.

[...]
> >> Readability is more important to me over the long haul.
> 
> > Huh? Where have I said otherwise? I just happen to think that these two
> > goals are not mutually exclusive. And there should be a better solution
> > than the language ordaining a strange kind of Hungarian notation.
> 
> The C++ conventions I mentioned were a strange kind of Hungarian;

I am aware of that; that's why I suggested that the "self." notation is
not much different. More often than not it _is_ entirely clear from the
context what you are doing, and in those cases the required
qualification is needless visual overhead, just like Hungarian notation
(in fact, it is probably even worse).

> "x.y" is
> just the way Python *always* spells "give me the value of attribute y in x's
> namespace", whether x is a module, class or instance object.  Consistent,
> explicit & obvious, given Python's modeling of modules, classes and
> instances *as* namespaces.

Then why do we have "from module import name"? :)
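
In other words, Python itself already lets you drop the qualification
when you decide the context is clear enough. A tiny illustration
(os.path chosen purely as an example):

	import os.path
	p = os.path.join("spam", "eggs")	# qualified: module.name

	from os.path import join
	p = join("spam", "eggs")		# unqualified: same function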

The point I am trying to make is that qualification is sometimes
redundant when the context is already known. In that case the
unambiguous reference comes at the cost of cluttering the code with
extraneous tokens. This is exacerbated by the fact that "self" does not
stand out, and that you easily end up with more tokens per line than
short-term memory is comfortable with.

Come to think of it, why don't we require that _local_ variables be
qualified, too? For instance:

	for local.item in local.list:
	  local.sum = local.sum + local.item

The answer, I think, is the assumption that local variables are the
default case: that _most_ of the time you operate on local variables,
and you don't want to clutter your code. But in an OO language that
assumption no longer holds; you operate on instance variables just as
often. The same reasoning that leaves local variables unqualified can
be applied to methods operating on instance variables.

I noticed, for instance, that I much preferred the way Ruby denotes
attributes, even though it uses a superficially similar approach (Ruby
prefixes attributes with an "@" sign rather than with "self."). So
instead of

	self.last = self.lines[-1]
	self.lines = self.lines[:-1]

we have something like:

	@last = @lines[-1]
	@lines = @lines[0...-1]

While I'm not that happy with this kind of notation, either, I find it
much easier to pick out what exactly is going on.

Ideally, of course, the code would look like:

	last = lines[-1]
	lines = lines[:-1]

and it should be made obvious from the context whether last/lines are
local or instance variables.

[Perl's prefix symbols.]
> Seriously, languages rarely start out doing things that are insane within
> their own worldview.  Perl would be *much* harder to work with in the
> absence of those $@%& thingies!

I am fully aware of this, and I agree that they are a very necessary
evil in Perl. The point remains, however, that many people do regard
Perl as less readable because of them.

[...]

> So the decorations solve a problem in Perl that Python doesn't have, so they
> make good sense in Perl but not in Python.

Pardon me, but I disagree here; these symbols provide annotations that
tell you what type a variable has. Since Python does not allow you to
declare that a variable is of a certain type, you have to rely on
conventions to ensure that there is no misunderstanding. In a mailing
list manager, is "list" really a list, or is it a string containing the
name of the mailing list?
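
A made-up fragment of such a mailing list manager shows the problem;
nothing but the naming convention (and the surrounding code) tells the
reader which is which:

	list_name = "python-list"		# a string: the list's name
	subscribers = ["tim", "guido"]		# an actual Python list

	# Perl's sigils answer the question at every use site:
	# $list is a scalar, @list is an array.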

But apparently this isn't a big problem, which implies that it is
possible to deal with the absence of declarations in a way that doesn't
require you to qualify each and every use of an entity, and still end
up with maintainable code.

> Python's object.attr notation
> solves Python's problem:  with respect to which namespace is attr to be
> resolved?  Christian Tismer once suggested that Python take that another
> step, and require that e.g.
> 
>     def f(x, y):
>         global sum1, sum2, total
>         sum1 = x
>         sum2 = y
>         total = x + y
> 
> instead be written
> 
>     def f(x, y):
>         global.sum1 = x
>         global.sum2 = y
>         global.total = x + y
> 
> I think that's a fine idea too.  I'd even go a step toward Perl and make
> mapping objects use "{}" for indexing instead of "[]".  Stop me before I sin
> <wink>.

I would prefer to go in the opposite direction, actually, using
something like "global" to denote attributes. For instance:

	attr last, lines

	last = lines[-1]
	lines = lines[:-1]
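
Spelled out inside a class body, that might look something like the
sketch below; "attr" is not real Python syntax, and the class is
invented for illustration. The second method shows the spelling Python
requires today:

	class Buffer:
		# hypothetical:
		#
		#	attr last, lines
		#
		#	def pop(self):
		#		last = lines[-1]
		#		lines = lines[:-1]

		# actual Python today:
		def pop(self):
			self.last = self.lines[-1]
			self.lines = self.lines[:-1]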

			Reimer Behrends



