[py-svn] commit/pytest: hpk42: some doc fixes and improvements to parametrized test examples, thanks ccxCZ for review and suggestions.

Bitbucket commits-noreply at bitbucket.org
Wed Feb 9 19:53:17 CET 2011


1 new changeset in pytest:

http://bitbucket.org/hpk42/pytest/changeset/48581fdf5bfc/
changeset:   r2156:48581fdf5bfc
user:        hpk42
date:        2011-02-09 14:55:21
summary:     some doc fixes and improvements to parametrized test examples, thanks ccxCZ for review and suggestions.
affected #:  6 files (6.2 KB)

--- a/_pytest/mark.py	Mon Feb 07 11:54:08 2011 +0100
+++ b/_pytest/mark.py	Wed Feb 09 14:55:21 2011 +0100
@@ -89,8 +89,8 @@
 class MarkDecorator:
     """ A decorator for test functions and test classes.  When applied
     it will create :class:`MarkInfo` objects which may be
-    :ref:`retrieved by hooks as item keywords`  MarkDecorator instances
-    are usually created by writing::
+    :ref:`retrieved by hooks as item keywords <excontrolskip>`.
+    MarkDecorator instances are often created like this::
 
         mark1 = py.test.mark.NAME              # simple MarkDecorator
         mark2 = py.test.mark.NAME(name1=value) # parametrized MarkDecorator
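
A quick sketch of how such a mark is typically used (the ``slow``
name here is made up for illustration)::

    import py

    slow = py.test.mark.slow        # simple MarkDecorator

    @slow
    def test_long_running():
        # "slow" is now attached as a keyword to this test item and
        # can be inspected by hooks such as pytest_runtest_setup(item)
        assert True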


--- a/_pytest/python.py	Mon Feb 07 11:54:08 2011 +0100
+++ b/_pytest/python.py	Wed Feb 09 14:55:21 2011 +0100
@@ -537,12 +537,10 @@
             list of calls to the test function will be used.
 
         :arg param: will be exposed to a later funcarg factory invocation
-            through the ``request.param`` attribute.  Setting it (instead of
-            directly providing a ``funcargs`` ditionary) is called
-            *indirect parametrization*.  Indirect parametrization is
-            preferable if test values are expensive to setup or can
-            only be created after certain fixtures or test-run related
-            initialization code has been run.
+            through the ``request.param`` attribute.  It allows you to
+            defer test fixture setup activities to when an actual
+            test is run.  Note that metafunc.addcall() is invoked during
+            the collection phase of a test run.
         """
         assert funcargs is None or isinstance(funcargs, dict)
         if id is None:
@@ -556,7 +554,13 @@
         self._calls.append(CallSpec(funcargs, id, param))
 
 class FuncargRequest:
-    """ A request for function arguments from a test function. """
+    """ A request for function arguments from a test function.
+        
+        Note that there is an optional ``param`` attribute if
+        metafunc.addcall(param=...) was invoked for this test.
+        If no such call was made in a ``pytest_generate_tests``
+        hook, the attribute will not be present.
+    """
     _argprefix = "pytest_funcarg__"
     _argname = None
 


--- a/doc/example/index.txt	Mon Feb 07 11:54:08 2011 +0100
+++ b/doc/example/index.txt	Wed Feb 09 14:55:21 2011 +0100
@@ -12,7 +12,7 @@
 
    reportingdemo.txt
    simple.txt
-   pythoncollection.txt
    mysetup.txt
    parametrize.txt
+   pythoncollection.txt
    nonpython.txt


--- a/doc/example/parametrize.txt	Mon Feb 07 11:54:08 2011 +0100
+++ b/doc/example/parametrize.txt	Wed Feb 09 14:55:21 2011 +0100
@@ -1,3 +1,5 @@
+
+.. _paramexamples:
 
 parametrizing tests
 =================================================
@@ -6,6 +8,137 @@
 parametrization scheme for tests.  Here we provide
 some examples for inspiration and re-use.
 
+generating parameter combinations, depending on command line
+----------------------------------------------------------------------------
+
+.. regendoc:wipe
+
+Let's say we want to execute a test with different parameters,
+with the parameter range determined by a command line
+argument.  Let's first write a simple computation test::
+
+    # content of test_compute.py
+
+    def test_compute(param1):
+        assert param1 < 4
+
+Now we add a test configuration like this::
+
+    # content of conftest.py
+
+    def pytest_addoption(parser):
+        parser.addoption("--all", action="store_true",
+            help="run all combinations")
+
+    def pytest_generate_tests(metafunc):
+        if 'param1' in metafunc.funcargnames:
+            if metafunc.config.option.all:
+                end = 5
+            else:
+                end = 2
+            for i in range(end):
+                metafunc.addcall(funcargs={'param1': i})
+
+This means that we only run 2 tests if we do not pass ``--all``::
+
+    $ py.test -q test_compute.py
+    collecting ... collected 2 items
+    ..
+    2 passed in 0.01 seconds
+
+We run only two computations, so we see two dots.
+Let's run the full monty::
+
+    $ py.test -q --all
+    collecting ... collected 5 items
+    ....F
+    ================================= FAILURES =================================
+    _____________________________ test_compute[4] ______________________________
+    
+    param1 = 4
+    
+        def test_compute(param1):
+    >       assert param1 < 4
+    E       assert 4 < 4
+    
+    test_compute.py:3: AssertionError
+    1 failed, 4 passed in 0.03 seconds
+
+As expected, when running the full range of ``param1`` values
+we'll get a failure on the last one.
+
+Deferring the setup of parametrized resources
+---------------------------------------------------
+
+.. regendoc:wipe
+
+The parametrization of test functions happens at collection
+time.  It is often a good idea to set up possibly expensive
+resources only when the actual test is run.  Here is a simple
+example of how you can achieve that::
+
+    # content of test_backends.py
+    
+    import pytest
+    def test_db_initialized(db):
+        # a dummy test
+        if db.__class__.__name__ == "DB2":
+            pytest.fail("deliberately failing for demo purposes")
+
+Now we add a test configuration that generates two invocations
+of the ``test_db_initialized`` function, along with a factory
+that creates a database object only when each test is
+actually run::
+
+    # content of conftest.py
+
+    def pytest_generate_tests(metafunc):
+        if 'db' in metafunc.funcargnames:
+            metafunc.addcall(param="d1")
+            metafunc.addcall(param="d2")
+
+    class DB1:
+        "one database object"
+    class DB2:
+        "alternative database object"
+    
+    def pytest_funcarg__db(request):
+        if request.param == "d1":
+            return DB1()
+        elif request.param == "d2":
+            return DB2()
+        else:
+            raise ValueError("invalid internal test config")
+
+Let's first see how this looks at collection time::
+
+    $ py.test test_backends.py --collectonly
+    <Module 'test_backends.py'>
+      <Function 'test_db_initialized[0]'>
+      <Function 'test_db_initialized[1]'>
+
+And then when we run the test::
+
+    $ py.test -q test_backends.py
+    collecting ... collected 2 items
+    .F
+    ================================= FAILURES =================================
+    __________________________ test_db_initialized[1] __________________________
+    
+    db = <conftest.DB2 instance at 0x1a5b488>
+    
+        def test_db_initialized(db):
+            # a dummy test
+            if db.__class__.__name__ == "DB2":
+    >           pytest.fail("deliberately failing for demo purposes")
+    E           Failed: deliberately failing for demo purposes
+    
+    test_backends.py:6: Failed
+    1 failed, 1 passed in 0.02 seconds
+
+Now you see that one invocation of the test passes and the other fails,
+as is to be expected.
+
 Parametrizing test methods through per-class configuration
 --------------------------------------------------------------
 
@@ -41,12 +174,23 @@
 the respective settings::
 
     $ py.test -q
-    collecting ... collected 4 items
-    F..F
+    collecting ... collected 6 items
+    .FF..F
     ================================= FAILURES =================================
+    __________________________ test_db_initialized[1] __________________________
+    
+    db = <conftest.DB2 instance at 0xf81c20>
+    
+        def test_db_initialized(db):
+            # a dummy test
+            if db.__class__.__name__ == "DB2":
+    >           pytest.fail("deliberately failing for demo purposes")
+    E           Failed: deliberately failing for demo purposes
+    
+    test_backends.py:6: Failed
     _________________________ TestClass.test_equals[0] _________________________
     
-    self = <test_parametrize.TestClass instance at 0x1521440>, a = 1, b = 2
+    self = <test_parametrize.TestClass instance at 0xf93050>, a = 1, b = 2
     
         def test_equals(self, a, b):
     >       assert a == b
@@ -55,14 +199,14 @@
     test_parametrize.py:17: AssertionError
     ______________________ TestClass.test_zerodivision[1] ______________________
     
-    self = <test_parametrize.TestClass instance at 0x158aa70>, a = 3, b = 2
+    self = <test_parametrize.TestClass instance at 0xf93098>, a = 3, b = 2
     
         def test_zerodivision(self, a, b):
     >       pytest.raises(ZeroDivisionError, "a/b")
     E       Failed: DID NOT RAISE
     
     test_parametrize.py:20: Failed
-    2 failed, 2 passed in 0.03 seconds
+    3 failed, 3 passed in 0.04 seconds
 
 Parametrizing test methods through a decorator
 --------------------------------------------------------------
@@ -103,7 +247,7 @@
     ================================= FAILURES =================================
     _________________________ TestClass.test_equals[0] _________________________
     
-    self = <test_parametrize2.TestClass instance at 0x22a77e8>, a = 1, b = 2
+    self = <test_parametrize2.TestClass instance at 0x27e15a8>, a = 1, b = 2
     
         @params([dict(a=1, b=2), dict(a=3, b=3), ])
         def test_equals(self, a, b):
@@ -113,7 +257,7 @@
     test_parametrize2.py:19: AssertionError
     ______________________ TestClass.test_zerodivision[1] ______________________
     
-    self = <test_parametrize2.TestClass instance at 0x2332a70>, a = 3, b = 2
+    self = <test_parametrize2.TestClass instance at 0x2953bd8>, a = 3, b = 2
     
         @params([dict(a=1, b=0), dict(a=3, b=2)])
         def test_zerodivision(self, a, b):
@@ -142,4 +286,4 @@
    . $ py.test -q multipython.py
    collecting ... collected 75 items
    ....s....s....s....ssssss....s....s....s....ssssss....s....s....s....ssssss
-   48 passed, 27 skipped in 2.09 seconds
+   48 passed, 27 skipped in 1.59 seconds
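
A side note on the ``[0]``/``[1]`` suffixes in the output above:
``metafunc.addcall()`` also accepts an ``id`` argument (visible in the
python.py hunk further up), so the generating hook could label the
invocations more readably, e.g.::

    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.funcargnames:
            # explicit ids yield test_db_initialized[d1] / [d2]
            metafunc.addcall(id="d1", param="d1")
            metafunc.addcall(id="d2", param="d2")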


--- a/doc/example/simple.txt	Mon Feb 07 11:54:08 2011 +0100
+++ b/doc/example/simple.txt	Wed Feb 09 14:55:21 2011 +0100
@@ -84,64 +84,6 @@
 next example or refer to :ref:`mysetup` for more information
 on real-life examples.
 
-generating parameters combinations, depending on command line
-----------------------------------------------------------------------------
-
-.. regendoc:wipe
-
-Let's say we want to execute a test with different parameters
-and the parameter range shall be determined by a command
-line argument.  Let's first write a simple computation test::
-
-    # content of test_compute.py
-
-    def test_compute(param1):
-        assert param1 < 4
-
-Now we add a test configuration like this::
-
-    # content of conftest.py
-
-    def pytest_addoption(parser):
-        parser.addoption("--all", action="store_true",
-            help="run all combinations")
-
-    def pytest_generate_tests(metafunc):
-        if 'param1' in metafunc.funcargnames:
-            if metafunc.config.option.all:
-                end = 5
-            else:
-                end = 2
-            for i in range(end):
-                metafunc.addcall(funcargs={'param1': i})
-
-This means that we only run 2 tests if we do not pass ``--all``::
-
-    $ py.test -q test_compute.py
-    collecting ... collected 2 items
-    ..
-    2 passed in 0.01 seconds
-
-We run only two computations, so we see two dots.
-let's run the full monty::
-
-    $ py.test -q --all
-    collecting ... collected 5 items
-    ....F
-    ================================= FAILURES =================================
-    _____________________________ test_compute[4] ______________________________
-    
-    param1 = 4
-    
-        def test_compute(param1):
-    >       assert param1 < 4
-    E       assert 4 < 4
-    
-    test_compute.py:3: AssertionError
-    1 failed, 4 passed in 0.03 seconds
-
-As expected when running the full range of ``param1`` values
-we'll get an error on the last one.
 
 dynamically adding command line options
 --------------------------------------------------------------
@@ -167,15 +109,15 @@
 
     $ py.test
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
     gw0 I / gw1 I / gw2 I / gw3 I
     gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
     
     scheduling tests via LoadScheduling
     
-    =============================  in 0.29 seconds =============================
+    =============================  in 0.37 seconds =============================
 
-.. _`retrieved by hooks as item keywords`:
+.. _`excontrolskip`:
 
 control skipping of tests according to command line option
 --------------------------------------------------------------
@@ -214,12 +156,12 @@
 
     $ py.test -rs    # "-rs" means report details on the little 's'
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
     collecting ... collected 2 items
     
     test_module.py .s
     ========================= short test summary info ==========================
-    SKIP [1] /tmp/doc-exec-171/conftest.py:9: need --runslow option to run
+    SKIP [1] /tmp/doc-exec-275/conftest.py:9: need --runslow option to run
     
     =================== 1 passed, 1 skipped in 0.02 seconds ====================
 
@@ -227,7 +169,7 @@
 
     $ py.test --runslow
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
     collecting ... collected 2 items
     
     test_module.py ..
@@ -319,7 +261,7 @@
 
     $ py.test
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
     project deps: mylib-1.1
     collecting ... collected 0 items
     
@@ -342,7 +284,7 @@
 
     $ py.test -v
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1 -- /home/hpk/venv/0/bin/python
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0 -- /home/hpk/venv/0/bin/python
     info1: did you know that ...
     did you?
     collecting ... collected 0 items
@@ -353,7 +295,7 @@
 
     $ py.test
     =========================== test session starts ============================
-    platform linux2 -- Python 2.6.6 -- pytest-2.0.1
+    platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
     collecting ... collected 0 items
     
     =============================  in 0.00 seconds =============================
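
For reference, the ``--runslow`` runs shown above exercise the
skip-control recipe documented in this file (its full listing sits
outside the hunk); the pattern is roughly this sketch::

    # conftest.py

    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")

    def pytest_runtest_setup(item):
        if 'slow' in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")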


--- a/doc/funcargs.txt	Mon Feb 07 11:54:08 2011 +0100
+++ b/doc/funcargs.txt	Wed Feb 09 14:55:21 2011 +0100
@@ -28,6 +28,8 @@
 or with multiple numerical arguments sets and want to reuse the same set
 of test functions.
 
+.. _funcarg:
+
 Basic funcarg example
 -----------------------
 
@@ -196,6 +198,8 @@
     ======================== 9 tests deselected by '7' =========================
     ================== 1 passed, 9 deselected in 0.01 seconds ==================
 
+You might want to look at :ref:`more parametrization examples <paramexamples>`.
+
 .. _`metafunc object`:
 
 The **metafunc** object

Repository URL: https://bitbucket.org/hpk42/pytest/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


