(and about tests) Re: Pedantic pickling error after reload?

Diez B. Roggisch deets at nospam.web.de
Fri Feb 26 08:56:56 EST 2010


> at that point of comparison the module is already identical ("klass =
> getattr(mod, name)")

Ah, didn't know that context.

>> even more corner-cases. Python's import-mechanism can sometimes be
>> rather foot-shoot-prone.
>
> still don't see a real reason against the mere module+name comparison.
> same issues as during pickle.load. Just the class object is renewed
> (intentionally)
>
> If there are things with nested classes etc, the programmer will have to
> rethink things on a different level: design errors. a subject for
> pychecker/pylint - not for breaking pickle .dump ... ?

I'm not saying it necessarily breaks anything. I simply don't know
enough about it. It might just be that back then, checking identity
was deemed sufficient. But you can well argue your case on the
python-dev list, providing a patch + tests that ensure there is no
regression.
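
For readers joining in, a minimal sketch of the behaviour under
discussion (the module "mymodule" and its class Foo are invented; in
Python 2.x, reload() is a builtin rather than living in importlib):

import pickle
import importlib

import mymodule  # hypothetical module defining a class Foo

obj = mymodule.Foo()
importlib.reload(mymodule)  # rebinds mymodule.Foo to a fresh class

# On dump, pickle looks the class up via module + name and then
# insists on identity: getattr(mod, 'Foo') is type(obj). After the
# reload, module + name still match, but identity doesn't:
try:
    pickle.dumps(obj)
except pickle.PicklingError as e:
    print(e)  # ... it's not the same object as mymodule.Foo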

> well, reloading is the thing which I do most in coding practice :-)
> For me its a basic thing like cell proliferation in biology.

I simply never do it. It has subtle issues - one of which you found,
others you say you work around by introducing actual frameworks. But you
might well forget some corner-case & suddenly chase a chimera you deem
a bug, when in fact it's just an unwanted side-effect of reloading.

And all this extra complexity is only good for the process of actually
changing the code. It doesn't help you maintain code quality.
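
To give one concrete example of such a chimera (again, mymodule and
Foo are invented names):

import importlib
import mymodule  # hypothetical module defining a class Foo

old = mymodule.Foo()
importlib.reload(mymodule)

# mymodule.Foo is now a brand-new class object; instances created
# before the reload still point to the old one.
print(isinstance(old, mymodule.Foo))  # False - smells like a bug,
print(type(old) is mymodule.Foo)      # False - but is just reload()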

> Reentering into the same (complex) app state for evolving those
> thousands of small thing (where a full parallel test coverage doesn't
> work out) is a major dev time consuming factor in bigger projects - in
> C, Java projects and even with other dynamic languages.
> Dynamic classes are a main reason why I use Python (adopted from Lisp
> long time ago; is that reload thing here possible with Ruby too?)

So what? If this kind of complex state, evolved through rather lengthy
interactions, is the thing you need to work within, that's reason
enough for me to think about how to automate setting that very state
up. That's what programming is about - telling a computer to do things
it can do, which usually means it does them *much* faster & *much*
more reliably than humans do.

Frankly, I can't be bothered with clicking through layers of GUIs to
finally reach the destination I'm actually interested in. Let the
computer do that. And once I've taught it how, I just integrate that
into my test-suite.
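
As a sketch of what I mean - every name below is invented, the pattern
is the point: the fixture drives the application into the interesting
state, so nobody has to click their way there ever again:

import unittest

class CheckoutTest(unittest.TestCase):

    def setUp(self):
        # Replays the "clicks" that would otherwise be done by hand.
        # make_test_app, login etc. are hypothetical helpers.
        self.app = make_test_app()
        self.session = self.app.login("testuser", "secret")
        self.session.add_to_cart("widget", quantity=3)

    def test_bulk_discount(self):
        total = self.session.checkout_total()
        self.assertLess(total, 3 * self.session.unit_price("widget"))

if __name__ == "__main__":
    unittest.main()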


> I typically need just 1 full app reboot on 20..50 edit-run-cycles I
> guess. And just few unit test runs per release. Even for
> Cython/pyximport things I added support for this reload edit-run-cycle,
> because I cannot imagine to dev without this.

Let me assure you - it works :)

For example, yesterday I created a full CRUD-interface for a web-app
(which is the thing I work on mostly these days) without *once* taking
a look at the browser. I wrote actions, forms, HTML, and tests along
the way, developed the thing to completion, asserted certain
constraints and error-cases, and once finished, fired up the browser -
and behold, it worked!

Yes, I could have written that code on the fly, hitting F5 every few
seconds/minutes to see if things work out (instead of just running the
specific tests through nose) - but once I was finished, I wouldn't have
had anything permanent that ensured the functionality over time.
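
A toy version of that workflow, to make it concrete: the three-line
WSGI app below stands in for the real application, and WebTest is
merely one way of driving an app through its HTTP-interface without a
browser:

from webtest import TestApp  # third-party, one option among several

ITEMS = []

def crud_app(environ, start_response):
    # Absurdly simplified "create + list" stand-in for the real app.
    if environ["REQUEST_METHOD"] == "POST":
        ITEMS.append(environ["wsgi.input"].read())
        start_response("201 Created", [("Content-Type", "text/plain")])
        return [b"created"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("%d items" % len(ITEMS)).encode()]

def test_create_then_list():
    # Runs through nose or pytest - no browser involved.
    app = TestApp(crud_app)
    assert app.post("/", "name=widget").status_int == 201
    assert b"1 items" in app.get("/").body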

> this is a comfortable quasi religious theory raised often and easily
> here and there - impracticable and very slow on that fine grained code
> evolution level however. an interesting issue.

To me, that's just as much a religious statement, often heard from
people who aren't (really) into test-driven development. By which I
personally don't mean the variant where one writes tests first, and
then code. I always develop both in lock-step - sometimes introducing a
new feature first in my tests, e.g. as new arguments or new calls, and
then implementing it, but just as often the other way round.

The argument is always a variation of "my problem is too complicated,
the code-base too intertwined, to make it possible to test this".

I call this a bluff. You might work with a code-base that makes it
harder than necessary to write tests for new functionality. But then,
most of the time this is a sign of a lack of design. Writing with
testability in mind makes you think twice about how to properly
componentize your application and clearly separate logic from
presentation, and it validates your API-design, because using the API
happens immediately when you write the tests, and so forth.

>
> I do unit tests for getting stability on a much higher level where/when
> things and functionality are quite wired.
> Generally after having compared I cannot confirm that "write always
> tests before development" ideologies pay off in practice.
> "Reload > pychecker/pylint > tests" works most effectively with Python
> in my opinion.
> And for GUI-development the difference is max.
> (min for math algorithms which are well away from data structures/OO)

As I said, I mainly do web these days, which can be considered GUI
work as well. Testing the HTTP-interface is obviously easier &
perfectly possible, and is what I described earlier.

But we also use selenium to test JS-driven interfaces, now that the
complexity of the interface rises with all the bells & whistles of
ajaxiness and whatnot.
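
For the curious, a skeletal example (the URL and the element names are
invented, and the exact API depends on your selenium version - the
sketch below uses the webdriver flavour):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8080/items/new")  # hypothetical URL
    driver.find_element(By.NAME, "title").send_keys("widget")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # The assertion exercises the ajaxy round-trip end to end.
    assert "widget" in driver.page_source
finally:
    driver.quit()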


> Another issue regarding tests IMHO is, that one should not waste the
> "validation power" of unit tests too easily for permanent low level
> evolution purposes because its a little like bacteria becoming resistent
> against antibiotics: Code becoming 'fit' against artificial tests, but
> not against real word.

That's why I pull in the real world as well. I don't write unit-tests
only (in fact, I don't particularly like that term, because of its
narrowness) - I write tests for whatever condition I envision *or*
encounter.

If anything that makes my systems fail is reproducible, it becomes a
new test - and ensures that this thing never happens again.
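
In practice that can be as small as this (parse_price and the failing
input are invented for illustration):

import unittest
from myapp.pricing import parse_price  # hypothetical

class Regressions(unittest.TestCase):

    def test_comma_decimal_separator(self):
        # Once blew up in the real world on a German-style decimal
        # comma; this test makes sure it never does again.
        self.assertEqual(parse_price("1,50"), 1.5)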

Granted, though: there are things you can't really test, especially in
cases where you interact with various other agents that might behave
(to you) erratically.

I've done robot development as well, and of course testing e.g. an
acceleration ramp dependent on ground conditions isn't something a
simple unit-test can reproduce.

But then... I've written some, and made sure the robot was in a
controlled environment when executing them :)

All in all, this argument is *much* too often used as an excuse to
simply not go to every possible length to make your system as testable
as it can be. And in my experience, that's further than most people
think. As a consequence, the quality & stability as well as the design
of the application suffer.

> A rule that unit tests are used only near a release or a milestone is
> healthy in that sense I think.
> (And a quick edit-(real)run-interact cycle is good for speed)

Nope, not in my opinion. Making tests an afterthought may well lead to
them being written carelessly, not capturing the corner-cases you
encountered while actually developing. And I don't even buy the speed
argument - as I already said, most of the time the computer is faster
at setting up the environment for testing than you are, however
complex that may be.

Testing is no silver bullet. But it's a rather mighty sword... :)

Diez


