optimization question

brueckd at tbye.com
Mon Aug 12 13:21:48 EDT 2002


On Mon, 12 Aug 2002, Andrew Koenig wrote:

> Peter> Maybe I'm spoiled by XP and having so many unit tests that
> Peter> I am willing to refactor aggressively like this without any
> Peter> qualms...
> 
> Maybe you haven't been bitten badly enough yet.
> 
> As I understand it, one of the tenets of XP is that once the tests
> pass, you're done.  The trouble with that notion is that I have seen
> too many cases in which programs have bugs that no amount of testing
> can ever reveal with certainty.
> 
> Such bugs are often associated with semantically unsafe languages
> (such as C or C++) or language features (such as threading), but not
> always.  In fact, the hardest such bug that I can remember
> encountering in my own code was in a program written in a semantically
> safe language.

I'm not sure what the official XP "doctrine" is, but in practice what I've 
seen work well is to do as much testing as is reasonably possible while 
recognizing that bugs will still exist. Then, when a new bug is discovered, 
that recognition encourages root-cause and tip-of-the-iceberg analysis. 
Root-cause analysis has you looking at why the bug occurred, why it 
slipped through testing, why it wasn't caught in peer review, and so on, 
while tip-of-the-iceberg analysis sends you searching for the whole class of 
related bugs. You milk those results for a while (the product of which is 
more tests and, hopefully, an improvement in your processes) and then move 
on. Good tests are oh-so-important, but IMO without this analysis they 
give too much of a false sense of security (though they're still much better 
than no tests).
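
For instance, each fixed bug can grow into a small regression test. Just a
sketch - the module name, function, and bug number below are made up, not
from any real project:

    import unittest
    from myparser import parse_record   # hypothetical code under test

    class Bug1042Regression(unittest.TestCase):
        # Root cause: a trailing empty field made parse_record blow up.
        # Tip-of-the-iceberg: also cover other degenerate inputs of the
        # same class, not just the one report that came in.
        def test_trailing_empty_field(self):
            self.assertEqual(parse_record("a,b,"), ["a", "b", ""])

        def test_all_empty_fields(self):
            self.assertEqual(parse_record(",,"), ["", "", ""])

    if __name__ == "__main__":
        unittest.main()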

As far as safely changing (refactoring, usually) large blocks of code that 
should yield completely backwards-compatible results, the only way I've 
ever managed to come close to pulling that off is if my automated tests 
include lots and lots of real-world data: actual data and test cases from 
several different customers. It helps if you initially get a lot of real 
data from customers, but you can also collect it over time as bugs come in 
(each bug that gets fixed should result in one or more new tests - often 
the best thing to do is capture the actual data and real-world case that 
exposed the bug). The times I've tried big changes without real test data 
resulted in embarrassing "conversion guides" or "upgrade roadmaps". :-)
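
A sketch of what that can look like - the directory layout, file naming
convention, and 'convert' function here are only placeholders, not a real
setup:

    import glob, os, unittest
    from myconverter import convert   # hypothetical code under test

    DATA_DIR = "testdata"   # captured real-world inputs, one file per case

    class RealWorldData(unittest.TestCase):
        def test_captured_cases(self):
            # Each *.in file has a matching *.expected file recorded when
            # the original bug was fixed; refactored code must keep
            # reproducing the same output.
            for infile in glob.glob(os.path.join(DATA_DIR, "*.in")):
                expected = open(infile[:-3] + ".expected").read()
                self.assertEqual(convert(open(infile).read()), expected)

    if __name__ == "__main__":
        unittest.main()

Every customer bug report that ships data with it adds one more file pair
to that directory, so the safety net keeps growing over time.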

-Dave




