[SciPy-dev] Test Design Question for Stats.Models Code
Skipper Seabold
jsseabold at gmail.com
Fri Jul 17 13:18:04 EDT 2009
Hello all,
I am polishing up the generalized linear models right now for the
stats.models project and I have a question about using decorators with
my tests. The GLM framework has a central model with shared
properties and then several variations on this model, so to test I
have just as a simplified example:
from numpy.testing import *

DECIMAL = 4

class check_glm(object):
    '''
    res2 results will be obtained from R or the RModelwrap
    '''
    def test_params(self):
        assert_almost_equal(self.res1.params, self.res2.params, DECIMAL)

    def test_resids(self):
        assert_almost_equal(self.res1.resids, self.res2.resids, DECIMAL)

class test_glm_gamma(check_glm):
    def __init__(self):
        # Preprocessing to set up the results
        self.res1 = ResultsFromGLM
        self.res2 = R_Results

if __name__ == "__main__":
    run_module_suite()
My question is whether I can skip, for argument's sake, test_resids
depending, for example, on the class of self.res2, or because I set the
test condition to True in the test_<> class. In the check_glm class I
tried putting

    @dec.skipif(TestCondition, "Skipping this test because of ...")
    def test_resids(self):
        ...
TestCondition should be None by default, but how can I get the value
of TestCondition to evaluate to True if appropriate? I have tried a
few different ways, but I am a little stumped. Does this make sense/is
it possible? I'm sure I'm missing something obvious, but any insights
would be appreciated.
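One pattern that sidesteps the problem of dec.skipif being evaluated at
class-definition time (before self.res2 exists) is to raise a skip
exception inside the test body at run time. The sketch below uses
hypothetical names and data, and only the standard library's
unittest.SkipTest, with a hand-rolled loop standing in for the test
collector; nose, which numpy.testing used at the time, treats a raised
SkipTest the same way it treats a dec.skipif skip.

```python
import unittest

DECIMAL = 4

class CheckGLM(object):
    # Subclasses flip this flag to opt out of test_resids at run time,
    # once the instance (and hence res2) actually exists.
    skip_resids = False

    def test_params(self):
        assert round(abs(self.res1['params'] - self.res2['params']), DECIMAL) == 0

    def test_resids(self):
        if self.skip_resids:
            # Raised at call time, so it can depend on instance state.
            raise unittest.SkipTest("res2 has no residuals for this family")
        assert round(abs(self.res1['resids'] - self.res2['resids']), DECIMAL) == 0

class TestGLMGamma(CheckGLM):
    skip_resids = True  # e.g. the R results lack residuals for this model

    def __init__(self):
        # Stand-in numbers where real code would fit the models.
        self.res1 = {'params': 1.00001, 'resids': 0.5}
        self.res2 = {'params': 1.00002, 'resids': 0.5}

# Minimal runner standing in for nose's collector:
outcomes = {}
case = TestGLMGamma()
for name in ('test_params', 'test_resids'):
    try:
        getattr(case, name)()
        outcomes[name] = 'ok'
    except unittest.SkipTest:
        outcomes[name] = 'skipped'

print(outcomes)  # {'test_params': 'ok', 'test_resids': 'skipped'}
```

The key point is that the skip decision moves from decoration time to
call time, so it can look at any attribute the subclass set up.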
Cheers,
Skipper