[Python-ideas] `issubclass` shouldn't be raising exceptions for non-type inputs
Steven D'Aprano
steve at pearwood.info
Sun Dec 5 02:16:37 CET 2010
Cameron Simpson wrote:
> The try/except version goes like this:
>
> try:
>     x = y / obj.foo
> except ZeroDivisionError:
>     # handle 0
>
> Now, the reason I'm using "obj.foo" here is that obj.foo may be a property
> or otherwise implemented by __getattr__; arbitrarily complex code may
> be happening to obtain the value it returns. The upshot of that is that
> even this very simple looking code may raise an unhandled ZeroDivisionError
> from deeper in the call stack - I don't know whether it came from the
> division visible in the code example above.
That's true. But if obj.foo raises ZeroDivisionError, perhaps you should
be treating this as equivalent to x/0 and continue, particularly if you
have no control over the obj.foo. What else are you going to do
(assuming that letting the exception propagate is not an option)?
In any case, if you want to guard against such bugs, you can inspect the
traceback and decide what to do. This is particularly easy in Python 3.x,
but possible in 2.x as well:
>>> class K:
...     def __getattr__(self, name):
...         raise ZeroDivisionError
...
>>>
>>> try:
...     1/K().spam
... except ZeroDivisionError as e:
...     if e.__traceback__.tb_next is None:
...         print("handle division by zero")
...     else:
...         print("handle bug in __getattr__")
...
handle bug in __getattr__
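For reference, the 2.x route mentioned above goes through sys.exc_info()
rather than the exception's __traceback__ attribute, which only exists in
3.x. A minimal sketch (this spelling works under 3.x as well):

```python
import sys

class K(object):
    def __getattr__(self, name):
        raise ZeroDivisionError

def classify():
    try:
        return 1 / K().spam
    except ZeroDivisionError:
        # sys.exc_info()[2] is the 2.x-compatible spelling of e.__traceback__
        tb = sys.exc_info()[2]
        if tb.tb_next is None:
            # the exception came from the division in *this* frame
            return "handle division by zero"
        else:
            # the exception propagated up from deeper in the call stack
            return "handle bug in __getattr__"

print(classify())   # -> handle bug in __getattr__
```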
And it's not like the if test catches all possible errors. There are
many potential error conditions that aren't caught by something like:
if obj.foo != 0:
    print(x/obj.foo)
else:
    # handle 0
You still have no guarantee that the division will succeed. Perhaps one
or other of x and obj.foo define __truediv__ or __rtruediv__ in such a
way that it raises ZeroDivisionError even when obj.foo is not zero.
That's not necessarily a bug -- one or the other could be an interval
quantity that straddles zero, but isn't equal to zero.
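By way of illustration, here is a hypothetical Interval class (the name and
details are invented for this sketch) for which division raises
ZeroDivisionError even though the divisor compares unequal to zero:

```python
class Interval:
    """A hypothetical interval quantity [lo, hi] (illustrative only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __eq__(self, other):
        # Equal to a plain number only if the interval is exactly that point.
        return self.lo == other == self.hi
    def __ne__(self, other):
        return not self.__eq__(other)
    def __rtruediv__(self, numerator):
        # Dividing by an interval straddling zero is undefined: some value
        # inside the interval is zero, even though the interval itself != 0.
        if self.lo < 0 < self.hi:
            raise ZeroDivisionError("interval straddles zero")
        return Interval(numerator / self.hi, numerator / self.lo)

foo = Interval(-1, 2)
print(foo != 0)   # True: the interval is not equal to zero
# 1 / foo        # ...and yet this raises ZeroDivisionError
```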
If you fear that obj.foo could contain arbitrarily complex code that may
accidentally raise ZeroDivisionError (or anything else!), the same holds
for __ne__. Once you stop trusting your values, you can't trust
*anything* -- maybe obj.foo has its __eq__ and __ne__ reversed. Who
knows? How defensively do you code?
Here's an error condition from the Bad Old Days before IEEE floats:
perhaps division and equality are done to different precisions, such
that y != 0 but x/y still attempts division by zero. (This really used
to happen, on some mainframes.) Obviously this can't happen with Python
floats -- or can it? are IEEE semantics guaranteed? -- but it could
happen with some custom numeric type.
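A custom numeric type can reproduce that old failure mode directly. In this
sketch (the Coarse class is invented for illustration), equality compares at
full precision but division rounds its operand first, so y != 0 and yet
x / y blows up:

```python
class Coarse:
    """Hypothetical numeric type: exact comparison, coarse division."""
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return self.value == other       # exact, full precision
    def __ne__(self, other):
        return self.value != other
    def __rtruediv__(self, numerator):
        # Division works to only 2 decimal places, like the old mainframes.
        return numerator / round(self.value, 2)

y = Coarse(0.001)
print(y != 0)   # True: 0.001 really isn't zero
# 1 / y        # but round(0.001, 2) == 0.0, so this raises ZeroDivisionError
```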
You're right that the try...except idiom is subject to false positives.
It might, under some (presumably rare) circumstances, catch an exception
which should be treated as a bug. But the if...else idiom is subject to
both false positives and false negatives: your test may be too strict,
or it may be not strict enough. Or both at the same time. Or the thing
you are testing may be subject to race conditions:
if os.path.exists(pathname):
    # This is NOT safe!
    fp = open(pathname)
else:
    print("no such file")
vs.
try:
    fp = open(pathname)
except IOError:
    print("no such file")
It seems to me that the LBYL idiom is actually *less* safe than the
exception handling idiom.
--
Steven