[New-bugs-announce] [issue18968] Find a way to detect regressions in test execution

Nick Coghlan report at bugs.python.org
Sun Sep 8 05:05:34 CEST 2013


New submission from Nick Coghlan:

Issue 18952 (fixed in http://hg.python.org/cpython/rev/23770d446c73) was another case where a test suite change resulted in tests not being executed as expected, but this wasn't noticed at first, since it didn't *fail* the tests; it just silently skipped them.
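
For instance, a skip guard whose condition can never be true will quietly disable a test everywhere while the run still reports success. A minimal sketch (hypothetical, not the actual change from issue 18952):

    import sys
    import unittest

    # sys.platform is "win32" even on 64-bit Windows, so this condition
    # is never true and the test is silently skipped on every platform.
    @unittest.skipUnless(sys.platform == "win64", "requires Windows")
    class RegistryTests(unittest.TestCase):
        def test_read_key(self):
            self.fail("would fail if it ever ran")

    if __name__ == "__main__":
        # Reports "OK (skipped=1)" and exits with status 0.
        unittest.main(verbosity=2)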

We've had similar issues in the past, due to test name conflicts (where the second test shadowed the first), to old regrtest-style test discovery missing a class name from the test list, and to incorrect skip conditions on platform-specific tests.
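
The shadowing case is particularly sneaky, since defining two methods with the same name is perfectly legal Python - the second binding silently replaces the first, so the loader only ever sees one of them. A contrived illustration:

    import unittest

    class ExampleTests(unittest.TestCase):
        def test_addition(self):
            self.assertEqual(1 + 1, 2)   # never runs: shadowed below

        def test_addition(self):
            # Same name: this rebinding silently replaces the method above.
            self.assertEqual(2 + 2, 4)

    if __name__ == "__main__":
        # The loader collects only one test_addition; the suite still
        # passes, so the lost test goes unnoticed.
        unittest.main()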

Converting "unexpected skips" to a failure isn't enough, since these errors occur at a narrower scope than entire test modules.

I'm not sure what *would* work, though. Perhaps collecting platform specific coverage stats for the test suite itself and looking for regressions?
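
As a rough sketch of the kind of check I mean (the module name and baseline figure are made up, and the baselines would presumably need to be recorded per platform):

    import unittest

    # Known-good execution counts recorded from an earlier run.
    BASELINE = {"test_example": 42}

    for name, expected in BASELINE.items():
        suite = unittest.defaultTestLoader.loadTestsFromName(name)
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        executed = result.testsRun - len(result.skipped)
        if executed < expected:
            print("possible execution regression in %s: expected >= %d "
                  "tests to run, only %d did" % (name, expected, executed))

Comparing executed counts rather than collected counts means a drop would show up whether the tests were lost at discovery time or skipped at run time.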

----------
messages: 197218
nosy: ncoghlan
priority: normal
severity: normal
status: open
title: Find a way to detect regressions in test execution
type: enhancement

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue18968>
_______________________________________

