Pylint false positives
Steven D'Aprano
steve+comp.lang.python at pearwood.info
Fri Aug 17 02:29:07 EDT 2018
On Wed, 15 Aug 2018 10:44:57 +0000, Jon Ribbens wrote:
> Obviously what I'm objecting to is not the presence of a loop, it's the
> presence of many obscure, bodgey and/or broken features all lumped
> together:
> * using @abstractmethod without metaclass=ABCMeta
I'll cop to that one -- it was a bug in my made-up pseudo-code. I simply
assumed that people would know you need to use ABCMeta.
Mea culpa.
On the other hand, your objection to the following three idioms is as
good an example of the Blub Paradox as I've ever seen. Python is
designed to do these sorts of things. Failing to use them when it
simplifies your code is like hiring a fork-lift to lift and turn your car
around because using reverse gear is "obscure, bodgey and/or broken".
> * code running directly under the class definition
> * creating a method then changing its name with foo.__name__
> * poking things into to the class namespace with locals()
Each of these are standard Python techniques, utterly unexceptional.
"Code running directly under the class" describes every use of the class
keyword (except those with an empty body). If you write:
class Spam:
    x = 1
you are running code under the class. This is not just a pedantic
technicality, it is fundamental to Python's execution model. If someone
does not understand this fact, they don't understand Python.
The Python interpreter goes to great lengths to insert the class
namespace into the execution scope while executing code under the class
body, and has done so since Python 1.5 or older, even before it had
lexical scoping!
[steve at ando ~]$ python1.5
Python 1.5.2 (#1, Aug 27 2012, 09:09:18) [GCC 4.1.2 20080704 (Red Hat
4.1.2-52)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = 23
>>> class Spam:
...     x = 42
...     y = x + 1
...
>>> print x, Spam.x, Spam.y
23 42 43
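The same behaviour holds in any modern Python 3; the session above, as a plain script:

```python
# Class bodies execute like ordinary code: names assigned there become
# class attributes, and earlier assignments are visible to later ones
# within the same body.
x = 23

class Spam:
    x = 42
    y = x + 1  # reads the class-local x, not the module-level one

print(x, Spam.x, Spam.y)  # → 23 42 43
```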
"Creating a method then changing its name" -- that's precisely what we do
anytime we create a closure and modify it using functools.wraps. The only
difference is that wraps hides that behind a decorator. Some of us have
been using Python long enough to remember when the standard idiom for
decorating a function was to manually adjust its name, docstring etc:
# Something like this.
def decorate(func):
    def inner(arg):
        preprocess()
        x = func(arg)
        postprocess()
        return x
    inner.__name__ = func.__name__
    inner.__doc__ = func.__doc__
    inner.__dict__.update(func.__dict__)
    return inner
Now functools.wraps hides all that behind a convenient decorator
interface, but we're still changing the name.
If you *don't* change the function's name, you're effectively populating
your code with functions that might as well be anonymous, or at least
have generic names like "inner". Why would you not give your functions
meaningful names, just because they came out of a factory?
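For comparison, the modern spelling of the same idiom, with functools.wraps doing the renaming (the decorated function here is just a made-up example):

```python
import functools

def decorate(func):
    @functools.wraps(func)  # copies __name__, __doc__, __dict__ etc. onto inner
    def inner(arg):
        return func(arg)
    return inner

@decorate
def greet(name):
    """Return a greeting."""
    return "Hello, " + name

print(greet.__name__)  # → greet
print(greet.__doc__)   # → Return a greeting.
```

Same name-changing as the manual version, just hidden behind the decorator.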
"Poking things into the class namespace with locals()" -- I'll admit that
this one is a little rarer. But again, the interpreter goes to deliberate
lengths to ensure that locals in a class body is available for writing.
If we're not supposed to use locals, why do you think Python makes it
available? It's not like writing to locals inside a class is any more
bizarre than using getattr; it's just less common.
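A minimal sketch of the technique (the class and names are made up; it relies on the class body's locals() being the real class namespace, which CPython guarantees):

```python
class Colours:
    # Writing to locals() inside a class body creates class attributes,
    # because the class body's local namespace *is* the class namespace.
    for _name in ('red', 'green', 'blue'):
        locals()[_name] = _name.upper()
    del _name  # tidy up the loop variable so it doesn't become an attribute

print(Colours.red)   # → RED
print(Colours.blue)  # → BLUE
```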
Not one of those three techniques is "bodgey and/or broken", and if you
think they're obscure, well, I'm sorry but that says more about the
breadth of your Python knowledge than the feature itself.
Nobody calling themselves an experienced Python programmer ought to have
any trouble reading and maintaining code like that I showed. If they do,
they have a serious hole in their knowledge of the language, like a C
programmer who doesn't get pointers.
> * dynamically adding @abstractmethod methods to a class
I simply don't get this objection at all. All methods are added
dynamically to classes (that's how Python's execution model works, def is
an executable statement not a declaration). Making them abstract doesn't
change this.
You might be thinking of the warning in the docs:
"Dynamically adding abstract methods to a class, [...]
[is] not supported."
but that is talking about the case where you add the method to the class
after the class is created, from the outside:
class X(metaclass=ABCMeta):
    def method(self):
        pass

# Doesn't work.
X.method = abstractmethod(X.method)
But this works exactly as you would expect:
py> class Foo(metaclass=abc.ABCMeta):
...     for name in ('spam', 'eggs'):
...         @abc.abstractmethod
...         def inner(self): pass
...         inner.__name__ = name
...         locals()[name] = inner
...     del inner
...
py> class Bar(Foo):
...     pass
...
py> Bar()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class Bar with abstract methods eggs, spam
> (Not to mention your code means the methods cannot have meaningful
> docstrings.)
Of course they can, provided they're all identical, give or take some
simple string substitutions.
The idea here is to remove boilerplate code, not to have to do large
amounts of significant computation for each placeholder. If it required
more than a few template substitutions:
inner.__doc__ %= (a, b)
at that point I'd bite the bullet and prefer the pain of swaths of dumb
boilerplate. But simple transformations are no big deal. You create the
method, set the docstring, change its name, set it to abstract, and make
it an attribute of the class.
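Those steps, written out concretely (the class and the 'read'/'write' names are hypothetical, purely for illustration):

```python
import abc

class Channel(metaclass=abc.ABCMeta):
    for _name in ('read', 'write'):
        # 1. create the method
        def inner(self):
            raise NotImplementedError
        # 2. set the docstring (a simple template substitution)
        inner.__doc__ = "%s data on the channel." % _name.capitalize()
        # 3. change its name
        inner.__name__ = _name
        # 4. set it to abstract
        inner = abc.abstractmethod(inner)
        # 5. make it an attribute of the class
        locals()[_name] = inner
    del inner, _name  # don't leave the scaffolding behind as attributes
```

Subclasses must override read and write before they can be instantiated, exactly as if the methods had been written out by hand.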
Who can't reason about five simple steps like that? Cross out the method-
specific details, using a "widget" instead:
class X:
    for name in ('red', 'green', 'blue', 'yellow'):
        widget = Widget(1, 2, 3)
        widget.set_state('not ready')
        widget.serial_number = get_serial_number()
        locals()[name] = widget
Anyone who couldn't reason about that probably shouldn't be calling
themselves a programmer. Making the widgets methods instead doesn't
change that.
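Widget and get_serial_number aren't defined above; with throwaway stand-ins for both (invented here just so the sketch executes), it runs as-is:

```python
import itertools

_serial = itertools.count(1000)

class Widget:
    """Hypothetical stand-in for whatever object the loop builds."""
    def __init__(self, *args):
        self.args = args
    def set_state(self, state):
        self.state = state

def get_serial_number():
    return next(_serial)

class X:
    for name in ('red', 'green', 'blue', 'yellow'):
        widget = Widget(1, 2, 3)
        widget.set_state('not ready')
        widget.serial_number = get_serial_number()
        locals()[name] = widget
    del name, widget  # keep the loop scaffolding out of the class

print(X.red.state)             # → not ready
print(X.yellow.serial_number)  # → 1003
```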
> I would refuse a pull request containing code such as the above, unless
> the number of methods being dynamically created was much larger than 4,
> in which case I would refuse it because the design of a class requiring
> huge numbers of dynamically created methods is almost certainly
> fundamentally broken.
If a class has "huge" (what, a hundred? a thousand?) methods, regardless
of whether they are abstract or concrete or generated inside a factory or
written out by hand, the class probably does too much.
But a class with 30 methods is fine (strings have at least 50), and if
six or ten of them are generated by a factory, what's the big deal?
Writing out methods with identical bodies is brainless boilerplate. I
don't clog up my code with brainless boilerplate unless there is a really
good reason for it.
There's always a trade-off to be made, choosing between the (slight) extra
complexity of a factory solution versus the tedious, error-prone volume
of boilerplate, so in practice, I probably wouldn't switch to a factory
solution for merely four methods with empty bodies. But I certainly would
for eight.
When making this trade-off, "my developers don't understand Python's
execution model or its dynamic features" is not a good reason to stick to
large amounts of mindless code. That's a good reason to send the
developer in question to a good Python course to update their skills.
(Of course if you can't do this for political or budget reasons, I
sympathise.)
--
Steven D'Aprano
"Ever since I learned about confirmation bias, I've been seeing
it everywhere." -- Jon Ronson