Proposal: default __init__

Alex Martelli aleaxit at yahoo.com
Thu Nov 16 05:13:07 EST 2000


<shindich at my-deja.com> wrote in message news:8uvtr8$lh5$1 at nnrp1.deja.com...
> One more reason not to use inheritance! Factories and containment rule!

Inheritance of implementation does have its problems, yes.

Specifically, it _strongly_ couples the inheriting component
to the component implementing the base classes.  The strength
of this coupling can be a real problem in terms of Meyer's
"Open/Closed Principle" and other key guidelines of OO
design (which I find tend to be extremely well presented
in the works of R. Martin; cf., for example:
http://axaonl.cern.ch:8000/a_doc$www/online/CXX_SEGREGATION-PRINCIPLE.HTML)
[I tend to use "component" where Martin uses "category" --
see his "Principle 5, Reuse/Release Equivalency Principle"].

Often, one can indeed apply Martin's "dependency inversion
principle" (which, in this case and many others, will in
practice tend to produce containment+delegation through
abstract-interfaces whose implementations are instantiated
via factories) to loosen the coupling and gain the resulting
advantages.


*However*...


There are many cases in which inheritance of implementation
turns out, despite its (important) "theoretical flaws", to
be *VERY* handy _in practical use_.  This is especially so
in the context of a language which promotes dynamic and fluid
development and deployment -- and Python truly shines here.

If [A] your base-classes ARE intrinsically far stabler than
your inheriting-classes, or [B] your development uses a
methodology that hinges on frequent refactoring, obsessive
testing, and "do the simplest thing that could possibly
work" (in other words, if it's not all that far from the
ideas of Extreme Programming:-), then the actual effects of
the "theoretical flaws" are minimized, and you can still
enjoy the practical benefits.

Inheritance of implementation is at its worst if the base
classes are NOT "intrinsically far stabler" than the client-
code inheriting from them, *AND* you're unwilling or unable
to retest and refactor at every release of the 'bases' (or
you don't trust your tests to be obsessive and extensive
enough...).  No dependency of stable code on unstable is
*ever* a _good_ thing, mind you, but you can keep it under
some control IF you're working in some kind of XP-like
setting.  If you can't or won't do that, then I do agree
with you that you should at least strength-reduce (and
factor-out) the dependency via containment &c &c.

Dreaming of stable, "Open/Closed" components inheriting
from unstable and changeable ones -- without continuous
refactoring, and without substantial explicit investment
in dependency-control -- is a pipedream.  On that, I think
we can agree!-)  I'm just trying to point out that there
IS a case for a different development-process style... and
in THAT context, implementation inheritance's extreme
handiness often makes it a very good choice to express
the "current factoring du jour" of functionality among
components (and among classes within components).  (Fowler's
"Refactoring", Addison-Wesley, is a rich and readable
repertory of elementary-refactoring techniques that can
help in any development-process, but will be particularly
useful in the context of an XP-like, highly-dynamic one).


As a side note -- Python is *exceptionally* good at
supporting just about whatever style of development
process you choose (with the notable exception of
static, compile-time type checks... it won't do THAT:-).


Something that many OO languages don't support with
inheritance, for example, but only with containment
and delegation, is changing, on the fly at runtime,
the class one is delegating to (whether implicitly
via inheritance, or explicitly via containment).
So, one criterion of choice between
inheriting and containing, in those languages, is
"will I ever need to change an object's delegatee
on the fly at runtime?  If so, then inheritance is
not an option; if not, making the delegatee a base
of the object remains a possibility".

Python gives you no such constraint: you *can*
change the class of an object (and the bases of
a class) at runtime!  Inheritance thus remains a
possibility
even in cases where such dynamic changes may be
needed.
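
For instance, here's a minimal sketch (the class
names are invented for illustration) of re-binding
an instance's class on the fly:

    class Plain:
        def greet(self):
            return "hello"

    class Shouting:
        def greet(self):
            return "HELLO!!!"

    x = Plain()
    print(x.greet())        # prints: hello

    # re-bind the instance's class at runtime -- Python allows it:
    x.__class__ = Shouting
    print(x.greet())        # prints: HELLO!!!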


And on the other side of the coin -- in many
languages, a downside of contain+delegate is
the amount of boilerplate you have to write.

If object A wants to delegate several methods
to a contained sub-object B, it must have, for
each of them, an explicitly written 'stub' that
collects the arguments and forwards the call;
no such chore is needed if B is a base.  Thus,
a push to inherit, rather than contain-and-delegate.
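
To make the chore concrete, here's a tiny sketch
(the class names are made up for illustration) of
the kind of stub-writing containment demands,
transcribed into Python:

    class Worker:
        # hypothetical contained sub-object B
        def start(self):
            return "started"
        def stop(self):
            return "stopped"

    class Wrapper:
        # object A: contains a B, forwards a few methods to it
        def __init__(self):
            self._b = Worker()

        # one hand-written 'stub' per delegated method:
        def start(self, *args, **kwds):
            return self._b.start(*args, **kwds)

        def stop(self, *args, **kwds):
            return self._b.stop(*args, **kwds)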

In Python, thanks to __getattr__ and __setattr__,
things are much rosier... you CAN write your
delegations in them, and even easily 'automate'
them, by just defining in your objects a few
correspondence tables -- "if somebody asks for
THIS attribute here, delegate the request to
THAT contained-object".  You can reuse __getattr__
from a suitable mixin-class that tailors itself
to your specific needs through such tables...!
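
For instance, one possible shape for such a mixin
(just a sketch of the idea; the names are made up):

    class DelegateMixin:
        # subclasses fill _delegates with a mapping from
        # attribute-name to the name of the contained
        # object that should serve that attribute
        _delegates = {}

        def __getattr__(self, name):
            # only called when normal lookup fails
            try:
                holder = self._delegates[name]
            except KeyError:
                raise AttributeError(name)
            return getattr(getattr(self, holder), name)

    class Stack(DelegateMixin):
        _delegates = {'append': '_items', 'pop': '_items'}
        def __init__(self):
            self._items = []

    s = Stack()
    s.append('spam')    # forwarded to s._items.append
    print(s.pop())      # forwarded to s._items.pop: prints spam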


*Python gives you more options* -- it makes both
inheritance, AND containment-and-delegation,
handier and niftier than many other OO languages,
if you want them to be.  Not only does it _enable,
but not *force*,_ you to use OO at all in any
given situation -- it's just as empowering in
terms of _which_ OO style (inherit or contain)
you may choose to use for your solutions...!


Alex




