But I don't have a nice answer. It's always been possible to worm around
the perceived clunkiness with what, by any objective measure, is very
simple code (albeit perhaps not _obvious_ code, at first; your isnumeric
function fits in here), and no Real Simple extension satisfies.
E.g., sometimes I want to know whether an object is specifically a plain
integer, sometimes any flavor of integer, sometimes any built-in numeric
type, and sometimes just whether the object supports something it _calls_
"subtraction" (the Dates module was interesting in this respect, since
it's both a real stretch to call a Date object a numeric type, and a real
convenience to define _some_ of the numeric operations on Date objects).
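In today's Python the four flavors of test can be spelled out directly (the numbers ABCs came along much later than this post, so this is a modern paraphrase of the distinctions, not what was available at the time):

```python
import datetime
from numbers import Integral, Number

def classify(obj):
    """Report every level of 'numeric-ness' that obj satisfies."""
    levels = []
    if type(obj) is int:            # specifically a plain integer
        levels.append("plain int")
    if isinstance(obj, Integral):   # any flavor of integer
        levels.append("integral")
    if isinstance(obj, Number):     # any built-in numeric type
        levels.append("numeric")
    if hasattr(obj, "__sub__"):     # merely supports something it calls "subtraction"
        levels.append("subtractable")
    return levels

print(classify(3))                           # satisfies all four levels
print(classify(3.5))                         # numeric, but not integral
print(classify(datetime.date(2024, 1, 1)))   # only "subtractable", like a Date
```

Note that a date object lands in the last bucket alone: not a numeric type by any stretch, yet it supports subtraction.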
Quite some time ago, a certain well-known Python wizard favored me with a
lucid discussion of some of these issues in E-Mail; I'll reproduce part
of that here, without their permission:
> >Ha! You can justify
> > if type(self) is type(other) and other.__class__ is Complex:
> >but you can't call it "wonderfully clear" <grin>. Supposing that "type"
> >returned a unique type for instances of user-defined classes, and that
> >class objects had a __type__ attribute which was the type of instances of
> >that class, _this_ would be wonderfully clear:
> > if type(other) is Complex.__type__:
> >I'm not in love with that notation, I'm just trying to improve your
> >esthetic judgment <wink>.
> [a certain well-known Python wizard]
> Yes, but this has the problem with derived classes explained above.
> Maybe I should do it anyway, and also add a simple way to test for
> derived classes.
> [and "the problem with derived classes explained above"]
> Seriously, even though it wouldn't be too hard to introduce a new type
> object for each class, I don't know if this would solve too many
> problems. I can foresee one new problem: derived classes. Naive code
> implementing e.g. complex numbers that tests whether two objects have
> the same type will work until someone derives a trivial class...
> Languages like C++ sidestep this issue by not making classes and types
> first-class objects. The compiler has a number of tests similar to my
> proposed sametype(), e.g. compatible_for_assignment(), etc.
> In Python most tests for type compatibility in the interpreter are
> done on a much more pragmatic basis: the only requirement is that an
> object supports the operations that are actually applied to it. This
> means that you can write "polymorphic" code, which doesn't care
> whether the argument to a function has exactly the right type, as long
> as it supports the proper operations or has the proper attributes.
> For example, it is now possible to substitute a user-defined object
> for sys.stdout; the only requirement of this object is that it has a
> method "write" taking a string argument. In C++, this is impossible:
> a common base class must exist.
> It follows that explicit type checks in Python code should be
> exceptions to the rule, and indeed there is often something fishy
> whenever type() is called. However, for __coerce__ functions as used
> in Complex or Rat there really isn't a serious alternative. I suppose
> a cheap way of testing for a subclass would help a little. This
> doesn't help in case someone creates a "Complex lookalike" but that
> doesn't sound like a big problem in practice (and at least one of the
> types could be taught about the other).
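The sys.stdout remark is easy to demonstrate; the FakeStdout class below is my own illustration, not code from the post, and all the interpreter asks of it is a write method taking a string:

```python
import sys

class FakeStdout:
    """Not derived from any file class: the only requirement sys.stdout
    imposes is a write method taking a string argument."""
    def __init__(self):
        self.captured = []
    def write(self, s):
        self.captured.append(s)

fake = FakeStdout()
saved, sys.stdout = sys.stdout, fake
try:
    print("no common base class required")
finally:
    sys.stdout = saved          # always restore the real stdout

print("".join(fake.captured), end="")
```

No common base class exists between FakeStdout and a real file object, and none is needed.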
I've since taken the "the only requirement is that an object supports the
operations that are actually applied to it" as a useful design principle,
and found that most of my uses of "type" vanished as a result, and that
the code became more general even while becoming simpler.
A neat example of the latter phenomenon is a rework of the Complex
module: by pruning out overly-protective uses of "type", I got a Complex
module that works fine for Complex numbers whose real & imaginary parts
are arbitrary-precision rationals (or integers, or-- indeed --are Complex
themselves: any objects that "support the operations applied to them").
This is slick! "type" is really needed in only a few places, and half of
those just to worm around the non-coercion of + and * arguments.
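A toy version of the idea (mine, not the actual Complex module): the arithmetic below never asks what type its parts are, so exact rationals work exactly as well as floats would:

```python
from fractions import Fraction

class Complex:
    """The parts may be any objects supporting +, - and *; the class
    never inspects their type."""
    def __init__(self, re, im):
        self.re, self.im = re, im
    def __add__(self, other):
        return Complex(self.re + other.re, self.im + other.im)
    def __mul__(self, other):
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)
    def __repr__(self):
        return "Complex(%r, %r)" % (self.re, self.im)

# Arbitrary-precision rational parts, with no changes to the class:
z = Complex(Fraction(1, 3), Fraction(1, 2))
w = Complex(Fraction(2, 3), Fraction(-1, 2))
print(z * w)
```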
So I pretty much stopped worrying about the type system. However, since
I'm the primary consumer of the classes I write, I'm happy with obscure
error msgs from deep in the bowels of the class implementation; another
user would doubtless much rather be told up front "you can't divide a
complex by a dictionary" at the top level, and supplying that
friendliness would put me right back in the "type" business. Messy.
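The trade-off can be sketched like so (my illustration, with hypothetical names; note that the up-front check buys a readable message at the cost of some generality):

```python
class FriendlyComplex:
    """Trades a little duck-typed generality for a clear top-level error."""
    def __init__(self, re, im):
        self.re, self.im = re, im
    def __truediv__(self, other):
        # The friendliness: reject unusable operands up front with a
        # readable message, instead of an obscure one from deep inside.
        if not (hasattr(other, "re") and hasattr(other, "im")):
            raise TypeError("can't divide a complex by a %s"
                            % type(other).__name__)
        d = other.re * other.re + other.im * other.im
        return FriendlyComplex((self.re * other.re + self.im * other.im) / d,
                               (self.im * other.re - self.re * other.im) / d)

try:
    FriendlyComplex(1, 2) / {}
except TypeError as e:
    print(e)        # a top-level complaint, not a traceback from the bowels
```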
> Which leads me to comment on the 2nd half ... One result of the
> experiment was discovering that my attempt at a "string index" for a
> sequence didn't work: the index can only be an int!
There are similar (undocumented but certainly not crazy) constraints on
numeric operations. E.g.,
>>> from Rat import rat
Stack backtrace (innermost last):
File "<stdin>", line 1
TypeError: number coercion failed
Stack backtrace (innermost last):
File "<stdin>", line 1
TypeError: bad operand type(s) for /
Presumably I could trick the first one into working (whatever that might
mean in this context <grin>), but not the latter. I agree it would be a
bit nicer if Python didn't constrain the kinds of objects passed to
__div__, __getslice__, etc. You can _usually_ worm around these
restrictions by disguising the built-in object in a class instance, but I
see that doesn't work for __getslice__.
On the other hand, __getslice__(s,i,j) is documented as adding len(s) to
j if j<0 etc, so there appears to be a good reason for insisting that i &
j be ints (i.e., so that "j<0" and "j+len(s)" make sense!).
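Modern Python eventually softened this exact restriction with the __index__ protocol (and __getslice__ is gone entirely): a non-int object can volunteer an int for precisely those "j<0" and "j+len(s)" computations. A sketch, mine rather than anything from the post:

```python
class Nth:
    """Not an int, but able to stand in for one as a slice index,
    via the __index__ protocol."""
    def __init__(self, n):
        self.n = n
    def __index__(self):
        return self.n

s = "hello world"
print(s[Nth(0):Nth(5)])   # the slice machinery calls __index__ to get ints

# A plain string still fails, for the reason given above: slicing needs
# values on which j < 0 and j + len(s) make sense.
try:
    s["a":"b"]
except TypeError as e:
    print(e)
```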
y'rs - tim
Tim Peters firstname.lastname@example.org
not speaking for Kendall Square Research Corp