[Python-Dev] Avoiding cascading test failures

Alexandre Vassalotti alexandre at peadrop.com
Sun Sep 2 19:14:45 CEST 2007


On 8/28/07, Collin Winter <collinw at gmail.com> wrote:
> On 8/22/07, Alexandre Vassalotti <alexandre at peadrop.com> wrote:
> > When I was fixing tests failing in the py3k branch, I found the number
> > of duplicate failures annoying. Often, a single bug in an important
> > method or function caused a large number of test cases to fail. So, I
> > thought of a simple mechanism for avoiding such cascading failures.
> >
> > My solution is to add a notion of dependency to testcases. A typical
> > usage would look like this:
> >
> >     @depends('test_getvalue')
> >     def test_writelines(self):
> >         ...
> >         memio.writelines([buf] * 100)
> >         self.assertEqual(memio.getvalue(), buf * 100)
> >         ...
>
> This definitely seems like a neat idea. Some thoughts:
>
> * How do you deal with dependencies that cross test modules? Say
> test A depends on test B, how do we know whether it's worthwhile
> to run A if B hasn't been run yet? It looks like you run the test
> anyway (I haven't studied the code closely), but that doesn't
> seem ideal.

I am not sure what you mean by "test modules". Do you mean a module in
the Python sense, or something like a test-case class?

> * This might be implemented in the wrong place. For example, the [x
> for x in dir(self) if x.startswith('test')] you do is most certainly
> better-placed in a custom TestLoader implementation.

That certainly is a good suggestion. I am not sure yet how I will
implement my idea in the unittest module, but I am pretty sure it will
end up quite different from my prototype.
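To give a rough idea of the approach, here is a minimal sketch (not the
prototype itself; it assumes a unittest with skipTest() support, and the
names depends and DependencyTestCase are just illustrative choices):

    import unittest

    def depends(*names):
        """Declare that a test method depends on other tests in its class."""
        def decorator(method):
            method._depends_on = names
            return method
        return decorator

    def _unsuccessful(result):
        # Failures, errors, and skips all count as "did not pass", so
        # that dependents of a skipped test get skipped as well.
        return len(result.failures) + len(result.errors) + len(result.skipped)

    class DependencyTestCase(unittest.TestCase):

        @classmethod
        def _failed(cls):
            # Keep one failure record per concrete TestCase subclass.
            if '_failed_names' not in cls.__dict__:
                cls._failed_names = set()
            return cls._failed_names

        def setUp(self):
            method = getattr(self, self._testMethodName)
            for name in getattr(method, '_depends_on', ()):
                if name in self._failed():
                    self.skipTest('dependency %r failed' % name)

        def run(self, result=None):
            if result is None:
                result = self.defaultTestResult()
            before = _unsuccessful(result)
            super().run(result)
            if _unsuccessful(result) > before:
                self._failed().add(self._testMethodName)
            return result

A test class would then subclass DependencyTestCase and apply the
decorator exactly as in the usage example quoted above. Note that this
naive version only works because unittest runs test methods in
alphabetical order, so test_getvalue happens to run before
test_writelines; a real implementation would have to order tests by
their dependencies explicitly, which is why your TestLoader suggestion
fits well.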

> But despite that, I think it's a cool idea and worth pursuing. Could
> you set up a branch (probably of py3k) so we can see how this plays
> out in the large?

Sure. I need to finish merging pickle and cPickle for Py3k before
tackling this project, though.

-- Alexandre
