Best way to assert unit test cases with many conditions

Peter Otten __peter__ at web.de
Wed Jul 19 08:24:11 EDT 2017


Ganesh Pal wrote:

> On Tue, Jul 18, 2017 at 11:02 PM, Dan Strohl <D.Strohl at f5.com> wrote:
> 
>>
>> Like this:
>>
>> def test_this(self):
>>     for i in range(10):
>>         with self.subTest('test number %s' % i):
>>             self.assertTrue(i <= 5)
>>
>> With the subTest() method, if anything within that subTest fails, it
>> won't stop the process and will continue with the next step.

> Thanks for reading my email, and yes, you got it right: I am adding a
> bunch of subtests that are all similar and differ only in their
> parameters.


> But I can't use the loop that you have mentioned, because I want to
> achieve (1) and (2):

> (1) I would want my subtests to have a *condition* such that the
> entire test passes if any of the sub-tests passes.

Your spec translates to something like:

$ cat stop_on_first_success.py          
import logging

import unittest
import sys

log = logging.getLogger()

class T(unittest.TestCase):
    def test_foo(self):
        # Collect all subtest_foo_* methods in alphabetical order.
        subtests = sorted(
            name for name in dir(self) if name.startswith("subtest_foo_")
        )
        for name in subtests:
            method = getattr(self, name)
            try:
                method()
            except Exception as err:
                log.error(err)
            else:
                break  # first success -- stop running further subtests
        else:
            # for/else: this runs only if the loop never hit `break`,
            # i.e. every subtest failed.
            self.fail("no successful subtest")

    def subtest_foo_01_int(self):
        self.assertTrue(isinstance(x, int))
    def subtest_foo_02_42(self):
        self.assertEqual(x, 42)
    def subtest_foo_03_upper(self):
        self.assertEqual(x.upper(), x)

if __name__ == "__main__":
    logging.basicConfig()

    x = sys.argv.pop(1)
    x = eval(x)
    print("Running tests with x = {!r}".format(x))

    unittest.main()

The x = eval() part is only for demonstration purposes. 
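If you want to keep the command-line switch but avoid eval(), a minimal
alternative (a sketch, not part of the original script) is
ast.literal_eval(), which parses Python literals and rejects arbitrary
expressions:

import ast

x = sys.argv.pop(1)
x = ast.literal_eval(x)  # accepts only literals like '"foo"' or '42'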

Below is the script output for various incantations. The subtests are 
executed in alphabetical order of the subtest_foo_xxx method names, failures 
are logged, and the loop stops after the first success.

$ python3 stop_on_first_success.py '"foo"'
Running tests with x = 'foo'
ERROR:root:False is not true
ERROR:root:'foo' != 42
ERROR:root:'FOO' != 'foo'
- FOO
+ foo

F
======================================================================
FAIL: test_foo (__main__.T)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "stop_on_first_success.py", line 22, in test_foo
    self.fail("no successful subtest")
AssertionError: no successful subtest

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
$ python3 stop_on_first_success.py '"FOO"'
Running tests with x = 'FOO'
ERROR:root:False is not true
ERROR:root:'FOO' != 42
.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK
$ python3 stop_on_first_success.py '42'
Running tests with x = 42
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
$ python3 stop_on_first_success.py '42.'
Running tests with x = 42.0
ERROR:root:False is not true
.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK

However, for my taste such a test is both too complex and too vague. If you 
have code that tries to achieve something in different ways, put these 
attempts into functions that you can test individually with specific data 
that causes them to succeed or fail, as in the sketch below.
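
For example (a hypothetical sketch; parse_int and parse_float stand in
for whatever strategies your real code tries), each attempt becomes a
plain function with its own focused tests:

import unittest

def parse_int(text):
    return int(text)

def parse_float(text):
    return float(text)

class ParseTests(unittest.TestCase):
    def test_parse_int_accepts_digits(self):
        self.assertEqual(parse_int("42"), 42)

    def test_parse_int_rejects_float_literal(self):
        # int() raises ValueError on "42.0"
        with self.assertRaises(ValueError):
            parse_int("42.0")

    def test_parse_float_accepts_float_literal(self):
        self.assertEqual(parse_float("42.0"), 42.0)

Each strategy then succeeds or fails on its own, and the test report
tells you exactly which one broke.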




