advice on programming style: is multiple inheritance bad?
Roy Smith
roy at panix.com
Sun Feb 1 14:25:31 EST 2004
In article <bvjhkv$ss4$1 at news.rz.uni-karlsruhe.de>,
Uwe Mayer <merkosh at hadiko.de> wrote:
> Hi,
>
> I got a class A2 that needs to inherit from class A1.
> A2, as well as some other classes, implements the same functionality, and in
> order to prevent code duplication I'd like to factor that (potentially)
> duplicate code into a class SuperA from which A2 and all the other classes
> could inherit.
> However, this would cause A2 to have two super classes A1 and SuperA - which
> is possible of course in Python.
>
> My question is: is that bad programming style?
> From C++ and Java we "learn" that you shouldn't do that.
C++ allows multiple inheritance, Java does not.
Whether multiple inheritance is bad or not is a matter of opinion. It
certainly can lead to complexities if the namespaces of the superclasses
overlap. To be honest, I don't see how Java's use of interfaces really
solves that problem; it just moves it somewhere else.
Some people advocate that if you are going to use multiple inheritance,
you should think of it as one "main" superclass, with the rest of them
"mix-in" classes which add functionality (think Decorator pattern).
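To make that concrete, here's a minimal sketch of the mix-in idea: one "main" superclass plus a mix-in that adds an orthogonal capability. All the names here are invented for illustration.

```python
import json

class Document:
    """The "main" superclass."""
    def __init__(self, text):
        self.text = text

class JSONSerializableMixin:
    """A mix-in adding one extra capability; not meant to stand alone."""
    def to_json(self):
        return json.dumps(self.__dict__)

class Report(Document, JSONSerializableMixin):
    """Inherits its core behavior from Document, plus the mix-in."""
    pass

r = Report("quarterly numbers")
print(r.to_json())  # prints {"text": "quarterly numbers"}
```

Because the mix-in defines no `__init__` of its own and touches only its one method, the namespaces don't overlap and the usual multiple-inheritance headaches don't arise.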
On the other hand, sometimes you've got two orthogonal concepts that it
makes sense to represent as two inheritance trees using multiple
inheritance. For example, if you were classifying animals, you might
have 5 subclasses of vertebrate: fish, amphibian, bird, reptile, and
mammal. But, you might also want to have classes to represent modes of
locomotion: swimmer, walker, and flyer. This might lead to such classes
as:
class whale(mammal, swimmer): pass
class penguin(bird, walker, swimmer): pass
class bat(mammal, walker, flyer): pass
class flyingFish(fish, swimmer, flyer): pass
class flounder(fish, swimmer): pass
class africanSparrow(bird, walker, flyer): pass
whale.greatestDepth() makes sense, but bat.greatestDepth() does not.
Likewise, africanSparrow.unladenAirspeed(), but not
penguin.unladenAirspeed().
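Here's a runnable sketch of those two orthogonal hierarchies. The method names follow the ones above; the attribute names and the particular values are invented for illustration.

```python
class vertebrate: pass
class fish(vertebrate): pass
class bird(vertebrate): pass
class mammal(vertebrate): pass

class swimmer:
    def greatestDepth(self):
        return self.depth       # metres; set by the concrete class

class walker: pass

class flyer:
    def unladenAirspeed(self):
        return self.airspeed    # metres/second; set by the concrete class

class whale(mammal, swimmer):
    depth = 2000

class africanSparrow(bird, walker, flyer):
    airspeed = 11

print(whale().greatestDepth())      # swimmers know their depth
# whale().unladenAirspeed() would raise AttributeError -- whales don't fly,
# because whale doesn't inherit from flyer.
```

Each capability lives in exactly one class, and a concrete animal picks up only the methods that make sense for it.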
Is that bad programming style? If it turns out to be the simplest way
to accurately represent the objects you're working with in the problem
domain you're trying to solve, I don't see why it should be considered
bad style.
This is an area where religious beliefs come into play. I'm sure you
will find people who would recoil violently at the idea. Most of them
probably grew up on Java :-)