Best way to assert unit test cases with many conditions

Terry Reedy tjreedy at udel.edu
Wed Jul 19 14:01:41 EDT 2017


On 7/19/2017 8:24 AM, Peter Otten wrote:
> Ganesh Pal wrote:

>> (1) I would want my subtests to have a *condition* such that my
>> entire test passes if any of the sub-tests passed.

If I understand correctly, you want

assertTrue(subtest1 or subtest2 or subtest3 or subtest4 ...)

or

assertTrue(any(iterable_of_subtests))

Each 'subtestn' can be an assertion, an expression, or a function call.
Peter's code below implements the general idea above in the 'any' form, 
with function calls, for your particular situation where you also want 
to log subtest failures without failing the overall test.  (The 'any' 
builtin or's together an indefinite number of items; the 'all' builtin 
and's them.)
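As a minimal sketch of the 'any' form (the value x and the three checks are assumed for the demo; any() short-circuits, so checking stops at the first success):

```python
import unittest

class TestFirstSuccess(unittest.TestCase):
    def test_any_of_subtests(self):
        x = "FOO"          # value under test (assumed for the demo)
        # Each subtest expressed as a plain boolean check instead of
        # an assertion, so failures don't raise.
        subtests = [
            lambda: isinstance(x, int),   # subtest 1: fails for "FOO"
            lambda: x == 42,              # subtest 2: fails for "FOO"
            lambda: x.upper() == x,       # subtest 3: succeeds here
        ]
        # Passes if any one of the subtests is true.
        self.assertTrue(any(check() for check in subtests))
```

The drawback of this form is that failing checks are silently skipped; Peter's version below turns each check back into a real assertion so that failures can be logged.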

> Your spec translates to something like:
> 
> $ cat stop_on_first_success.py
> import logging
> 
> import unittest
> import sys
> 
> log = logging.getLogger()
> 
> class T(unittest.TestCase):
>      def test_foo(self):
>          subtests = sorted(
>              name for name in dir(self) if name.startswith("subtest_foo_")
>          )
>          for name in subtests:
>              method = getattr(self, name)
>              try:
>                  method()
>              except Exception as err:
>                  log.error(err)
>              else:
>                  break
>          else:
>              self.fail("no successful subtest")
> 
>      def subtest_foo_01_int(self):
>          self.assertTrue(isinstance(x, int))
>      def subtest_foo_02_42(self):
>          self.assertEqual(x, 42)
>      def subtest_foo_03_upper(self):
>          self.assertEqual(x.upper(), x)
> 
> if __name__ == "__main__":
>      logging.basicConfig()
> 
>      x = sys.argv.pop(1)
>      x = eval(x)
>      print("Running tests with x = {!r}".format(x))
> 
>      unittest.main()
> 
> The x = eval() part is only for demonstration purposes.
> 
> Below's the script output for various incantations. The subtests are
> executed in alphabetical order of the subtest_foo_xxx method names, failures
> are logged, and the loop stops after the first success.
> 
> $ python3 stop_on_first_success.py '"foo"'
> Running tests with x = 'foo'
> ERROR:root:False is not true
> ERROR:root:'foo' != 42
> ERROR:root:'FOO' != 'foo'
> - FOO
> + foo
> 
> F
> ======================================================================
> FAIL: test_foo (__main__.T)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>    File "stop_on_first_success.py", line 22, in test_foo
>      self.fail("no successful subtest")
> AssertionError: no successful subtest
> 
> ----------------------------------------------------------------------
> Ran 1 test in 0.001s
> 
> FAILED (failures=1)
> $ python3 stop_on_first_success.py '"FOO"'
> Running tests with x = 'FOO'
> ERROR:root:False is not true
> ERROR:root:'FOO' != 42
> .
> ----------------------------------------------------------------------
> Ran 1 test in 0.001s
> 
> OK
> $ python3 stop_on_first_success.py '42'
> Running tests with x = 42
> .
> ----------------------------------------------------------------------
> Ran 1 test in 0.000s
> 
> OK
> $ python3 stop_on_first_success.py '42.'
> Running tests with x = 42.0
> ERROR:root:False is not true
> .
> ----------------------------------------------------------------------
> Ran 1 test in 0.001s
> 
> OK
> 
> However, for my taste such a test is both too complex and too vague. If you
> have code that tries to achieve something in different ways then put these
> attempts into functions that you can test individually with specific data
> that causes them to succeed or fail.
> 
> 

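Peter's closing advice -- factor each attempt into its own function and 
test it directly with data chosen to make it succeed or fail -- could 
look like the following sketch (the first_success helper and the test 
names are hypothetical, not from his post):

```python
import unittest

def first_success(attempts, value):
    # Hypothetical production helper: try each attempt in order and
    # return the first result that does not raise.
    for attempt in attempts:
        try:
            return attempt(value)
        except Exception:
            continue
    raise ValueError("no attempt succeeded")

class TestAttemptsIndividually(unittest.TestCase):
    # Each attempt is tested on its own, with specific data,
    # instead of one combined test over all of them.
    def test_int_parse_succeeds(self):
        self.assertEqual(int("42"), 42)

    def test_int_parse_fails(self):
        with self.assertRaises(ValueError):
            int("foo")

    def test_falls_through_to_second_attempt(self):
        # int("42.5") raises, so float is tried next.
        self.assertEqual(first_success([int, float], "42.5"), 42.5)
```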

-- 
Terry Jan Reedy



