[Numpy-discussion] ticket 788: possible blocker

Travis E. Oliphant oliphant at enthought.com
Tue May 13 12:12:06 EDT 2008


Stéfan van der Walt wrote:
> Hi Travis
>
> 2008/5/13 Travis E. Oliphant <oliphant at enthought.com>:
>   
>>  I think Stefan is asking me, not you.    I don't think you should feel
>>  any sense of guilt.   I was the one who closed the ticket sans
>>  regression test.   I tend to still be of the opinion that a bug fix
>>  without a regression test is better than no bug fix at all.
>>     
>
> I suppose people may be getting tired of me singing the same tune all
> the time, but I wouldn't do it if I weren't deeply convinced of the
> improvement in software quality resulting from good test coverage.
> This is maybe even more true for NumPy than for other packages,
> illustrated by the recent discussion on masked arrays.  The unit tests
> act as a contract between ourselves and our users, and if this
> contract is lacking (or missing!), we cannot guarantee that APIs or
> even functionality will remain unbroken.  It may be best if we could
> formalise the policy around this, so that I can either keep quiet or
> expect a regression test with every check-in.
>   
I'm going to strongly oppose any "formalization" of such a policy.  

I think these sorts of things are best handled by reminding people of 
the benefits of tests and documentation and setting a good example 
rather than rigidly holding to a "policy" that does not encompass all 
situations equally well.  

Some bug fixes are valuable even if they don't have a unit test 
associated with them (such as IMHO the one under discussion here), 
especially if creating the unit test would take additional time that the 
bug fixer doesn't have and would lead to the bug not being fixed. 


Process does not create quality, people do.    


Processes, like unit testing, can help, but they usually have their own sets 
of flaws as well (how often is the "bug" actually a unit-test bug, and 
how often do we get a false sense of security because of a lack of 
code coverage?).   The unit test system is far from perfect, and there is 
quite a bit of overhead in constructing tests given the current framework 
(yes, I know we are fixing that...).

NumPy would not exist if I had followed the process you seem to want 
now.   I'm happy to increase the number of "unit-tests" written where I 
can, but I'm not going to be pigeon-holed into following a "process" 
that *requires* a unit-test for every check-in, and I can't in good 
conscience push such a policy on others.

Besides,  having a "test-per-checkin" is not the proper mapping in my 
mind.   I'd rather see whole check-ins devoted to testing large pieces 
of code than spend all of our unit-testing effort on a rigid policy of 
"regression" testing each check-in.
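(For readers outside the thread: the kind of regression test being debated is small. The sketch below is purely illustrative — the behavior checked and the test name are hypothetical, not the actual content of ticket 788 — but it shows the shape of a test that pins a previously buggy behavior in place.)

```python
import numpy as np

def test_reshape_regression():
    # Hypothetical example of a per-ticket regression test:
    # assert that a specific, once-broken behavior now holds,
    # so a future change that reintroduces the bug fails loudly.
    a = np.arange(6).reshape(2, 3)
    assert a.shape == (2, 3)
    assert a[1, 2] == 5  # element layout preserved by reshape

test_reshape_regression()
```

A test like this costs a few lines once the framework overhead is paid, which is the trade-off the thread is weighing.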


-Travis
