[Pytest-commit] commit/pytest: 5 new changesets

commits-noreply at bitbucket.org
Wed May 8 07:43:23 CEST 2013


5 new commits in pytest:

https://bitbucket.org/hpk42/pytest/commits/8dfa43a47cca/
Changeset:   8dfa43a47cca
User:        hpk42
Date:        2013-05-07 16:26:56
Summary:     don't use indexservers anymore
Affected #:  1 file

diff -r 37ebb8278004f6aa99f4deafe032c790c5cc5e99 -r 8dfa43a47ccac80dbf252d35c2156ae9e0d38637 tox.ini
--- a/tox.ini
+++ b/tox.ini
@@ -1,17 +1,13 @@
 [tox]
 distshare={homedir}/.tox/distshare
 envlist=py25,py26,py27,py27-nobyte,py32,py33,py27-xdist,trial
-indexserver=
-    pypi = https://pypi.python.org/simple
-    testrun = http://pypi.testrun.org
-    default = http://pypi.testrun.org
 
 [testenv]
 changedir=testing
 commands= py.test --lsof -rfsxX --junitxml={envlogdir}/junit-{envname}.xml []
 deps=
-    :pypi:pexpect
-    :pypi:nose
+    pexpect
+    nose
 
 [testenv:genscript]
 changedir=.
@@ -21,8 +17,8 @@
 changedir=.
 basepython=python2.7
 deps=pytest-xdist
-    :pypi:mock
-    :pypi:nose
+    mock
+    nose
 commands=
   py.test -n3 -rfsxX \
         --junitxml={envlogdir}/junit-{envname}.xml testing
@@ -39,8 +35,8 @@
 
 [testenv:trial]
 changedir=.
-deps=:pypi:twisted
-     :pypi:pexpect
+deps=twisted
+     pexpect
 commands=
   py.test -rsxf testing/test_unittest.py \
         --junitxml={envlogdir}/junit-{envname}.xml {posargs:testing/test_unittest.py}
@@ -51,17 +47,17 @@
 
 [testenv:py32]
 deps=
-    :pypi:nose
+    nose
 
 [testenv:py33]
 deps=
-    :pypi:nose
+    nose
 
 [testenv:doc]
 basepython=python
 changedir=doc/en
-deps=:pypi:sphinx
-     :pypi:PyYAML
+deps=sphinx
+     PyYAML
 
 commands=
     make clean
@@ -70,15 +66,15 @@
 [testenv:regen]
 basepython=python
 changedir=doc/en
-deps=:pypi:sphinx
-     :pypi:PyYAML
+deps=sphinx
+     PyYAML
 commands=
     rm -rf /tmp/doc-exec*
     #pip install pytest==2.3.4
     make regen
 
 [testenv:py31]
-deps=:pypi:nose>=1.0
+deps=nose>=1.0
 
 [testenv:py31-xdist]
 deps=pytest-xdist
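For orientation, a minimal testenv after this change might look like the following sketch (the env list is illustrative): dependencies are plain requirement names resolved from the default index, with no ``:pypi:`` prefixes and no ``indexserver`` section.

```ini
; minimal sketch of a tox.ini after dropping indexservers (illustrative envs)
[tox]
envlist = py27,py33

[testenv]
changedir = testing
deps =
    pexpect
    nose
commands = py.test -rfsxX []
```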


https://bitbucket.org/hpk42/pytest/commits/35e73638a6e4/
Changeset:   35e73638a6e4
User:        hpk42
Date:        2013-05-07 18:40:26
Summary:     support boolean condition expressions in skipif/xfail
change documentation to prefer it over string expressions
Affected #:  6 files

diff -r 8dfa43a47ccac80dbf252d35c2156ae9e0d38637 -r 35e73638a6e4a1ae8b9b14237da4469960085d05 CHANGELOG
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,9 +1,15 @@
-Changes between 2.3.5 and DEV
+Changes between 2.3.5 and 2.4.DEV
 -----------------------------------
 
 - (experimental) allow fixture functions to be 
   implemented as context managers.  Thanks Andreas Pelme,
-  ladimir Keleshev.
+  Vladimir Keleshev.
+
+- (experimental) allow boolean expression directly with skipif/xfail
+  if a "reason" is also specified.  Rework skipping documentation
+  to recommend "conditions as booleans" because this prevents surprises
+  when importing markers between modules.  Specifying conditions
+  as strings will remain fully supported.
 
 - fix issue245 by depending on the released py-1.4.14
   which fixes py.io.dupfile to work with files with no

diff -r 8dfa43a47ccac80dbf252d35c2156ae9e0d38637 -r 35e73638a6e4a1ae8b9b14237da4469960085d05 _pytest/__init__.py
--- a/_pytest/__init__.py
+++ b/_pytest/__init__.py
@@ -1,2 +1,2 @@
 #
-__version__ = '2.3.6.dev3'
+__version__ = '2.4.0.dev1'

diff -r 8dfa43a47ccac80dbf252d35c2156ae9e0d38637 -r 35e73638a6e4a1ae8b9b14237da4469960085d05 _pytest/skipping.py
--- a/_pytest/skipping.py
+++ b/_pytest/skipping.py
@@ -89,7 +89,11 @@
                     if isinstance(expr, py.builtin._basestring):
                         result = cached_eval(self.item.config, expr, d)
                     else:
-                        pytest.fail("expression is not a string")
+                        if self.get("reason") is None:
+                            # XXX better be checked at collection time
+                            pytest.fail("you need to specify reason=STRING "
+                                        "when using booleans as conditions.")
+                        result = bool(expr)
                     if result:
                         self.result = True
                         self.expr = expr
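The branch added above can be paraphrased as a small self-contained sketch (the function name and signature are illustrative, not pytest's API): string conditions are evaluated in a namespace, while boolean conditions require an explicit reason.

```python
import sys

def evaluate_condition(expr, namespace, reason=None):
    """Illustrative paraphrase of the skipif/xfail condition check."""
    if isinstance(expr, str):
        # pre-2.4 style: the condition string is evaluated in a namespace
        return bool(eval(expr, namespace))
    # new in 2.4: a boolean is accepted, but only with an explicit reason
    if reason is None:
        raise ValueError("you need to specify reason=STRING "
                         "when using booleans as conditions.")
    return bool(expr)

print(evaluate_condition("sys.version_info >= (2,)", {"sys": sys}))  # True
print(evaluate_condition(False, {}, reason="never true"))            # False
```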

diff -r 8dfa43a47ccac80dbf252d35c2156ae9e0d38637 -r 35e73638a6e4a1ae8b9b14237da4469960085d05 doc/en/skipping.txt
--- a/doc/en/skipping.txt
+++ b/doc/en/skipping.txt
@@ -9,86 +9,110 @@
 or that you expect to fail you can mark them accordingly or you
 may call helper functions during execution of setup or test functions.
 
-A *skip* means that you expect your test to pass unless a certain
-configuration or condition (e.g. wrong Python interpreter, missing
-dependency) prevents it to run.  And *xfail* means that your test
-can run but you expect it to fail because there is an implementation problem.
+A *skip* means that you expect your test to pass unless the environment
+(e.g. wrong Python interpreter, missing dependency) prevents it from running.
+And *xfail* means that your test can run but you expect it to fail
+because there is an implementation problem.
 
-py.test counts and lists *skip* and *xfail* tests separately. However,
-detailed information about skipped/xfailed tests is not shown by default
-to avoid cluttering the output.  You can use the ``-r`` option to see
-details corresponding to the "short" letters shown in the test
-progress::
+py.test counts and lists *skip* and *xfail* tests separately. Detailed 
+information about skipped/xfailed tests is not shown by default to avoid
+cluttering the output.  You can use the ``-r`` option to see details
+corresponding to the "short" letters shown in the test progress::
 
     py.test -rxs  # show extra info on skips and xfails
 
 (See :ref:`how to change command line options defaults`)
 
 .. _skipif:
+.. _`condition booleans`:
 
 Marking a test function to be skipped
 -------------------------------------------
 
+.. versionadded:: 2.4
+
 Here is an example of marking a test function to be skipped
-when run on a Python3 interpreter::
+when run on a Python3.3 interpreter::
 
     import sys
-    @pytest.mark.skipif("sys.version_info >= (3,0)")
+    @pytest.mark.skipif(sys.version_info >= (3,3), 
+                        reason="not supported on python3.3")
     def test_function():
         ...
 
-During test function setup the skipif condition is
-evaluated by calling ``eval('sys.version_info >= (3,0)', namespace)``.
-(*New in version 2.0.2*) The namespace contains all the module globals of the test function so that
-you can for example check for versions of a module you are using::
+During test function setup the condition ("sys.version_info >= (3,3)") is
+checked.  If it evaluates to True, the test function will be skipped
+with the specified reason.  Note that pytest enforces specifying a reason 
+in order to report meaningful "skip reasons" (e.g. when using ``-rs``).
+
+You can share skipif markers between modules.  Consider this test module::
+
+    # content of test_mymodule.py
 
     import mymodule
-
-    @pytest.mark.skipif("mymodule.__version__ < '1.2'")
-    def test_function():
-        ...
-  
-The test function will not be run ("skipped") if
-``mymodule`` is below the specified version.  The reason
-for specifying the condition as a string is mainly that
-py.test can report a summary of skip conditions.
-For information on the construction of the ``namespace``
-see `evaluation of skipif/xfail conditions`_.
-
-You can of course create a shortcut for your conditional skip
-decorator at module level like this::
-
-    win32only = pytest.mark.skipif("sys.platform != 'win32'")
-
-    @win32only
+    minversion = pytest.mark.skipif(mymodule.__versioninfo__ < (1,1),
+                                    reason="at least mymodule-1.1 required")
+    @minversion
     def test_function():
         ...
 
-Skip all test functions of a class
---------------------------------------
+You can import it from another test module::
+
+    # test_myothermodule.py
+    from test_mymodule import minversion
+
+    @minversion
+    def test_anotherfunction():
+        ...
+
+For larger test suites it's usually a good idea to have one file
+where you define the markers which you then consistently apply
+throughout your test suite.
+
+Alternatively, the pre-pytest-2.4 way of specifying `string conditions`_
+instead of booleans will remain fully supported in future versions
+of pytest.  It cannot easily be used for importing markers between
+test modules, so it is no longer advertised as the primary method.
+
+
+Skip all test functions of a class or module
+---------------------------------------------
 
 As with all function :ref:`marking <mark>` you can skip test functions at the
-`whole class- or module level`_.  Here is an example
-for skipping all methods of a test class based on the platform::
+`whole class- or module level`_.  If your code targets python2.6 or above
+you can use the skipif decorator (and any other marker) on classes::
 
-    class TestPosixCalls:
-        pytestmark = pytest.mark.skipif("sys.platform == 'win32'")
-
-        def test_function(self):
-            "will not be setup or run under 'win32' platform"
-
-The ``pytestmark`` special name tells py.test to apply it to each test
-function in the class.  If your code targets python2.6 or above you can
-more naturally use the skipif decorator (and any other marker) on
-classes::
-
-    @pytest.mark.skipif("sys.platform == 'win32'")
+    @pytest.mark.skipif(sys.platform == 'win32', 
+                        reason="does not run on windows")
     class TestPosixCalls:
 
         def test_function(self):
             "will not be setup or run under 'win32' platform"
 
-Using multiple "skipif" decorators on a single function is generally fine - it means that if any of the conditions apply the function execution will be skipped.
+If the condition is true, this marker will produce a skip result for
+each of the test methods.
+
+If your code targets python2.5 where class-decorators are not available,
+you can set the ``pytestmark`` attribute of a class::
+
+    class TestPosixCalls:
+        pytestmark = pytest.mark.skipif(sys.platform == 'win32',
+                                        reason="does not run on windows")
+
+        def test_function(self):
+            "will not be setup or run under 'win32' platform"
+
+As with the class-decorator, the ``pytestmark`` special name tells
+py.test to apply it to each test function in the class.  
+
+If you want to skip all test functions of a module, you must use
+the ``pytestmark`` name at the module's global level::
+
+    # test_module.py
+
+    pytestmark = pytest.mark.skipif(...)
+
+If multiple "skipif" decorators are applied to a test function, it
+will be skipped if any of the skip conditions is true.
 
 .. _`whole class- or module level`: mark.html#scoped-marking
 
@@ -118,7 +142,8 @@
 As with skipif_ you can also mark your expectation of a failure
 on a particular platform::
 
-    @pytest.mark.xfail("sys.version_info >= (3,0)")
+    @pytest.mark.xfail(sys.version_info >= (3,3), 
+                       reason="python3.3 api changes")
     def test_function():
         ...
 
@@ -151,41 +176,19 @@
     
     ======================== 6 xfailed in 0.05 seconds =========================
 
-.. _`evaluation of skipif/xfail conditions`:
-
-Evaluation of skipif/xfail expressions
-----------------------------------------------------
-
-.. versionadded:: 2.0.2
-
-The evaluation of a condition string in ``pytest.mark.skipif(conditionstring)``
-or ``pytest.mark.xfail(conditionstring)`` takes place in a namespace
-dictionary which is constructed as follows:
-
-* the namespace is initialized by putting the ``sys`` and ``os`` modules
-  and the pytest ``config`` object into it.
-  
-* updated with the module globals of the test function for which the
-  expression is applied.
-
-The pytest ``config`` object allows you to skip based on a test configuration value
-which you might have added::
-
-    @pytest.mark.skipif("not config.getvalue('db')")
-    def test_function(...):
-        ...
-
 
 Imperative xfail from within a test or setup function
 ------------------------------------------------------
 
-If you cannot declare xfail-conditions at import time
-you can also imperatively produce an XFail-outcome from
-within test or setup code.  Example::
+If you cannot declare xfail or skipif conditions at import
+time, you can also produce such an outcome imperatively
+from within test or setup code::
 
     def test_function():
         if not valid_config():
-            pytest.xfail("unsupported configuration")
+            pytest.xfail("failing configuration (but should work)")
+            # or
+            pytest.skip("unsupported configuration")
 
 
 Skipping on a missing import dependency
@@ -202,16 +205,61 @@
 
     docutils = pytest.importorskip("docutils", minversion="0.3")
 
-The version will be read from the specified module's ``__version__`` attribute.
+The version will be read from the specified 
+module's ``__version__`` attribute.
 
-Imperative skip from within a test or setup function
-------------------------------------------------------
 
-If for some reason you cannot declare skip-conditions
-you can also imperatively produce a skip-outcome from
-within test or setup code.  Example::
+.. _`string conditions`:
 
+specifying conditions as strings versus booleans
+----------------------------------------------------------
+
+Prior to pytest-2.4 the only way to specify skipif/xfail conditions was
+to use strings::
+
+    import sys
+    @pytest.mark.skipif("sys.version_info >= (3,3)")
     def test_function():
-        if not valid_config():
-            pytest.skip("unsupported configuration")
+        ...
 
+During test function setup the skipif condition is evaluated by calling 
+``eval('sys.version_info >= (3,3)', namespace)``.  The namespace contains
+all the module globals, and ``os`` and ``sys`` as a minimum.
+
+Since pytest-2.4 `condition booleans`_ are considered preferable 
+because markers can then be freely imported between test modules.
+With strings you need to import not only the marker but also all
+variables used by the marker, which violates encapsulation.
+
+The reason for specifying the condition as a string was that py.test can
+report a summary of skip conditions based purely on the condition string.  
+With conditions as booleans you are required to specify a ``reason`` string.  
+
+Note that string conditions will remain fully supported and you are free
+to use them if you have no need for cross-importing markers.
+
+The evaluation of a condition string in ``pytest.mark.skipif(conditionstring)``
+or ``pytest.mark.xfail(conditionstring)`` takes place in a namespace
+dictionary which is constructed as follows:
+
+* the namespace is initialized by putting the ``sys`` and ``os`` modules
+  and the pytest ``config`` object into it.
+  
+* updated with the module globals of the test function for which the
+  expression is applied.
+
+The pytest ``config`` object allows you to skip based on a test
+configuration value which you might have added::
+
+    @pytest.mark.skipif("not config.getvalue('db')")
+    def test_function(...):
+        ...
+
+The equivalent with "boolean conditions" is::
+
+    @pytest.mark.skipif(not pytest.config.getvalue("db"),
+                        reason="--db was not specified")
+    def test_function(...):
+        pass
+
+

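To make the namespace construction described above concrete, here is a hedged sketch (the module-global name ``mymodule_version`` is hypothetical):

```python
import os
import sys

# Hypothetical globals of the test module the marker is applied in:
module_globals = {"mymodule_version": (1, 2)}

# The namespace starts with the sys and os modules (pytest also adds its
# config object) and is then updated with the test module's globals:
namespace = {"sys": sys, "os": os}
namespace.update(module_globals)

# the condition string is then evaluated in that namespace:
result = eval("sys.version_info >= (2,) and mymodule_version >= (1, 2)",
              namespace)
print(result)  # True
```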
diff -r 8dfa43a47ccac80dbf252d35c2156ae9e0d38637 -r 35e73638a6e4a1ae8b9b14237da4469960085d05 setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,7 +12,7 @@
         name='pytest',
         description='py.test: simple powerful testing with Python',
         long_description = long_description,
-        version='2.3.6.dev3',
+        version='2.4.0.dev1',
         url='http://pytest.org',
         license='MIT license',
         platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],

diff -r 8dfa43a47ccac80dbf252d35c2156ae9e0d38637 -r 35e73638a6e4a1ae8b9b14237da4469960085d05 testing/test_skipping.py
--- a/testing/test_skipping.py
+++ b/testing/test_skipping.py
@@ -569,7 +569,6 @@
         "*xfail(*condition, reason=None, run=True)*expected failure*",
     ])
 
-
 def test_xfail_test_setup_exception(testdir):
     testdir.makeconftest("""
             def pytest_runtest_setup():
@@ -610,3 +609,44 @@
     """)
 
 
+class TestBooleanCondition:
+    def test_skipif(self, testdir):
+        testdir.makepyfile("""
+            import pytest
+            @pytest.mark.skipif(True, reason="True123")
+            def test_func1():
+                pass
+            @pytest.mark.skipif(False, reason="True123")
+            def test_func2():
+                pass
+        """)
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines("""
+            *1 passed*1 skipped*
+        """)
+
+    def test_skipif_noreason(self, testdir):
+        testdir.makepyfile("""
+            import pytest
+            @pytest.mark.skipif(True)
+            def test_func():
+                pass
+        """)
+        result = testdir.runpytest("-rs")
+        result.stdout.fnmatch_lines("""
+            *1 error*
+        """)
+
+    def test_xfail(self, testdir):
+        testdir.makepyfile("""
+            import pytest
+            @pytest.mark.xfail(True, reason="True123")
+            def test_func():
+                assert 0
+        """)
+        result = testdir.runpytest("-rxs")
+        result.stdout.fnmatch_lines("""
+            *XFAIL*
+            *True123*
+            *1 xfail*
+        """)


https://bitbucket.org/hpk42/pytest/commits/cccbdc95b3fa/
Changeset:   cccbdc95b3fa
User:        hpk42
Date:        2013-05-07 21:34:59
Summary:     enhance index page, fix announcement index
Affected #:  3 files

diff -r 35e73638a6e4a1ae8b9b14237da4469960085d05 -r cccbdc95b3fa4b1a111449fdcef7319b34502e76 doc/en/announce/index.txt
--- a/doc/en/announce/index.txt
+++ b/doc/en/announce/index.txt
@@ -5,6 +5,7 @@
 .. toctree::
    :maxdepth: 2
 
+   release-2.3.5
    release-2.3.4
    release-2.3.3
    release-2.3.2

diff -r 35e73638a6e4a1ae8b9b14237da4469960085d05 -r cccbdc95b3fa4b1a111449fdcef7319b34502e76 doc/en/index.txt
--- a/doc/en/index.txt
+++ b/doc/en/index.txt
@@ -11,8 +11,11 @@
 
  - runs on Posix/Windows, Python 2.4-3.3, PyPy and Jython-2.5.1
  - :ref:`comprehensive online <toc>` and `PDF documentation <pytest.pdf>`_
+ - many :ref:`third party plugins <extplugins>` and 
+   :ref:`builtin helpers <pytest helpers>`
  - used in :ref:`many projects and organisations <projects>`, in test
-   suites ranging from 10 to 10s of thousands of tests
+   suites with up to twenty thousand tests
+ - strict policy of remaining backward compatible across releases
  - comes with many :ref:`tested examples <examples>`
 
 **provides easy no-boilerplate testing**
@@ -26,13 +29,13 @@
 
 **scales from simple unit to complex functional testing**
 
- - (new in 2.3) :ref:`modular parametrizeable fixtures <fixture>`
+ - :ref:`modular parametrizeable fixtures <fixture>` (new in 2.3,
+   improved in 2.4)
  - :ref:`parametrized test functions <parametrized test functions>`
  - :ref:`mark`
- - :ref:`skipping`
+ - :ref:`skipping` (improved in 2.4)
  - can :ref:`distribute tests to multiple CPUs <xdistcpu>` through :ref:`xdist plugin <xdist>`
  - can :ref:`continuously re-run failing tests <looponfailing>`
- - many :ref:`builtin helpers <pytest helpers>` and :ref:`plugins <plugins>`
  - flexible :ref:`Python test discovery`
 
 **integrates many common testing methods**:
@@ -50,8 +53,8 @@
 **extensive plugin and customization system**:
 
  - all collection, reporting, running aspects are delegated to hook functions
- - customizations can be per-directory, per-project or per PyPI released plugins
- - it is easy to add command line options or do other kind of add-ons and customizations.
+ - customizations can be per-directory, per-project or per PyPI released plugin
+ - it is easy to add command line options or customize existing behaviour
 
 .. _`Javascript unit- and functional testing`: http://pypi.python.org/pypi/oejskit
 

diff -r 35e73638a6e4a1ae8b9b14237da4469960085d05 -r cccbdc95b3fa4b1a111449fdcef7319b34502e76 doc/en/plugins.txt
--- a/doc/en/plugins.txt
+++ b/doc/en/plugins.txt
@@ -78,12 +78,22 @@
 * `pytest-capturelog <http://pypi.python.org/pypi/pytest-capturelog>`_:
   to capture and assert about messages from the logging module
 
+* `pytest-cov <http://pypi.python.org/pypi/pytest-cov>`_:
+  coverage reporting, compatible with distributed testing
+
 * `pytest-xdist <http://pypi.python.org/pypi/pytest-xdist>`_:
   to distribute tests to CPUs and remote hosts, to run in boxed
   mode which allows to survive segmentation faults, to run in
   looponfailing mode, automatically re-running failing tests 
   on file changes, see also :ref:`xdist`
 
+* `pytest-instafail <http://pypi.python.org/pypi/pytest-instafail>`_:
+  to report failures while the test run is happening.
+
+* `pytest-bdd <http://pypi.python.org/pypi/pytest-bdd>`_ and
+  `pytest-konira <http://pypi.python.org/pypi/pytest-konira>`_:
+  to write tests using behaviour-driven testing.
+
 * `pytest-timeout <http://pypi.python.org/pypi/pytest-timeout>`_:
   to timeout tests based on function marks or global definitions.
 
@@ -91,9 +101,6 @@
   to interactively re-run failing tests and help other plugins to
   store test run information across invocations.
 
-* `pytest-cov <http://pypi.python.org/pypi/pytest-cov>`_:
-  coverage reporting, compatible with distributed testing
-
 * `pytest-pep8 <http://pypi.python.org/pypi/pytest-pep8>`_:
   a ``--pep8`` option to enable PEP8 compliance checking.
 


https://bitbucket.org/hpk42/pytest/commits/5db7ac12892a/
Changeset:   5db7ac12892a
User:        hpk42
Date:        2013-05-07 21:37:08
Summary:     document context fixtures, also improve plugin docs
Affected #:  2 files

diff -r cccbdc95b3fa4b1a111449fdcef7319b34502e76 -r 5db7ac12892adca887ce08cbf01f5683dc24a75f CHANGELOG
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -3,7 +3,7 @@
 
 - (experimental) allow fixture functions to be 
   implemented as context managers.  Thanks Andreas Pelme,
-  Vladimir Keleshev.
+  Vladimir Keleshev. 
 
 - (experimental) allow boolean expression directly with skipif/xfail
   if a "reason" is also specified.  Rework skipping documentation

diff -r cccbdc95b3fa4b1a111449fdcef7319b34502e76 -r 5db7ac12892adca887ce08cbf01f5683dc24a75f doc/en/fixture.txt
--- a/doc/en/fixture.txt
+++ b/doc/en/fixture.txt
@@ -7,14 +7,14 @@
 
 .. currentmodule:: _pytest.python
 
-.. versionadded:: 2.0/2.3
+.. versionadded:: 2.0/2.3/2.4
 
 .. _`xUnit`: http://en.wikipedia.org/wiki/XUnit
-.. _`general purpose of test fixtures`: http://en.wikipedia.org/wiki/Test_fixture#Software
+.. _`purpose of test fixtures`: http://en.wikipedia.org/wiki/Test_fixture#Software
 .. _`Dependency injection`: http://en.wikipedia.org/wiki/Dependency_injection#Definition
 
-The `general purpose of test fixtures`_ is to provide a fixed baseline
-upon which tests can reliably and repeatedly execute.   pytest-2.3 fixtures
+The `purpose of test fixtures`_ is to provide a fixed baseline
+upon which tests can reliably and repeatedly execute.   pytest fixtures
 offer dramatic improvements over the classic xUnit style of setup/teardown 
 functions:
 
@@ -22,8 +22,7 @@
   from test functions, modules, classes or whole projects.
 
 * fixtures are implemented in a modular manner, as each fixture name
-  triggers a *fixture function* which can itself easily use other 
-  fixtures.
+  triggers a *fixture function* which can itself use other fixtures.
 
 * fixture management scales from simple unit to complex
   functional testing, allowing to parametrize fixtures and tests according
@@ -129,10 +128,10 @@
 
 When injecting fixtures to test functions, pytest-2.0 introduced the
 term "funcargs" or "funcarg mechanism" which continues to be present
-also in pytest-2.3 docs.  It now refers to the specific case of injecting
+also in docs today.  It now refers to the specific case of injecting
 fixture values as arguments to test functions.  With pytest-2.3 there are
-more possibilities to use fixtures but "funcargs" probably will remain
-as the main way of dealing with fixtures.
+more possibilities to use fixtures, but "funcargs" remain the main way,
+as they allow directly stating the dependencies of a test function.
 
 As the following examples show in more detail, funcargs allow test
 functions to easily receive and work against specific pre-initialized
@@ -154,10 +153,10 @@
 :py:func:`@pytest.fixture <_pytest.python.fixture>` invocation
 to cause the decorated ``smtp`` fixture function to only be invoked once 
 per test module.  Multiple test functions in a test module will thus
-each receive the same ``smtp`` fixture instance.  The next example also
-extracts the fixture function into a separate ``conftest.py`` file so
-that all tests in test modules in the directory can access the fixture
-function::
+each receive the same ``smtp`` fixture instance.  The next example puts
+the fixture function into a separate ``conftest.py`` file so
+that tests from multiple test modules in the directory can 
+access the fixture function::
 
     # content of conftest.py
     import pytest
@@ -233,24 +232,91 @@
     def smtp(...):
         # the returned fixture value will be shared for 
         # all tests needing it
+
+.. _`contextfixtures`:
         
+fixture finalization / teardowns 
+-------------------------------------------------------------
+
+pytest supports two styles of fixture finalization:
+
+- (new in pytest-2.4) by writing a contextmanager fixture
+  generator where a fixture value is "yielded" and the remainder 
+  of the function serves as the teardown code.  This integrates
+  very well with existing context managers.
+
+- by making a fixture function accept a ``request`` argument
+  with which it can call ``request.addfinalizer(teardownfunction)`` 
+  to register a teardown callback function.
+
+Both methods are strictly equivalent from pytest's view and will
+remain supported in the future.
+
+Because a number of people prefer the new contextmanager style 
+we describe it first::
+
+    # content of test_ctxfixture.py
+    
+    import smtplib
+    import pytest
+
+    @pytest.fixture(scope="module")
+    def smtp():
+        smtp = smtplib.SMTP("merlinux.eu")
+        yield smtp  # provide the fixture value
+        print ("teardown smtp")
+        smtp.close()
+
+pytest detects that you are using a ``yield`` in your fixture function,
+turns it into a generator and:
+
+a) iterates into it once to produce the fixture value
+b) iterates a second time to tear the fixture down, expecting
+   a StopIteration (which is produced automatically by the Python
+   runtime when the generator returns).
+
+.. note::
+
+    The teardown will execute independently of the status of test functions.
+    You do not need to write the teardown code into a ``try-finally`` clause
+    like you would usually do with ``contextlib.contextmanager`` decorated
+    functions.
+
+    If the fixture generator yields a second value pytest will report 
+    an error.  Yielding cannot be used for parametrization.  We'll describe 
+    ways to implement parametrization further below.
+
+Prior to pytest-2.4 you always needed to register a finalizer by accepting
+a ``request`` object into your fixture function and calling
+``request.addfinalizer`` with a teardown function::
+
+    import smtplib
+    import pytest
+
+    @pytest.fixture(scope="module")
+    def smtp(request):
+        smtp = smtplib.SMTP("merlinux.eu")
+        def fin():
+            print ("teardown smtp")
+            smtp.close()
+        request.addfinalizer(fin)
+        return smtp  # provide the fixture value
+
+This method of registering a finalizer reads more indirectly
+than the new contextmanager-style syntax because ``fin``
+is a callback function.
+
+
 .. _`request-context`:
 
 Fixtures can interact with the requesting test context
 -------------------------------------------------------------
 
-Fixture functions can themselves use other fixtures by naming
-them as an input argument just like test functions do, see 
-:ref:`interdependent fixtures`.  Moreover, pytest 
-provides a builtin :py:class:`request <FixtureRequest>` object,
+pytest provides a builtin :py:class:`request <FixtureRequest>` object,
 which fixture functions can use to introspect the function, class or module
-for which they are invoked or to register finalizing (cleanup)
-functions which are called when the last test finished execution.  
+for which they are invoked.
 
 Further extending the previous ``smtp`` fixture example, let's  
-read an optional server URL from the module namespace and register 
-a finalizer that closes the smtp connection after the last
-test in a module finished execution::
+read an optional server URL from the module namespace::
 
     # content of conftest.py
     import pytest
@@ -260,26 +326,25 @@
     def smtp(request):
         server = getattr(request.module, "smtpserver", "merlinux.eu")
         smtp = smtplib.SMTP(server)
-        def fin():
-            print ("finalizing %s" % smtp)
-            smtp.close()
-        request.addfinalizer(fin)
-        return smtp
+        yield smtp  # provide the fixture 
+        print ("finalizing %s" % smtp)
+        smtp.close()
 
-The registered ``fin`` function will be called when the last test
-using it has executed::
+The finalizing part after the ``yield smtp`` statement will execute
+when the last test using the ``smtp`` fixture has executed::
 
     $ py.test -s -q --tb=no
     FF
     finalizing <smtplib.SMTP instance at 0x1e10248>
 
 We see that the ``smtp`` instance is finalized after the two
-tests using it tests executed.  If we had specified ``scope='function'`` 
-then fixture setup and cleanup would occur around each single test. 
-Note that either case the test module itself does not need to change!
+tests which use it have finished executing.  If we instead specify
+``scope='function'`` then fixture setup and cleanup occur
+around each single test.  Note that in either case the test
+module itself does not need to change!
 
 Let's quickly create another test module that actually sets the
-server URL and has a test to verify the fixture picks it up::
+server URL in its module namespace::
     
     # content of test_anothersmtp.py
     
@@ -298,6 +363,10 @@
     >       assert 0, smtp.helo()
     E       AssertionError: (250, 'mail.python.org')
 
+Voila! The ``smtp`` fixture function picked up our mail server name
+from the module namespace.
+
+
 .. _`fixture-parametrize`:
 
 Parametrizing a fixture
@@ -323,11 +392,9 @@
                     params=["merlinux.eu", "mail.python.org"])
     def smtp(request):
         smtp = smtplib.SMTP(request.param)
-        def fin():
-            print ("finalizing %s" % smtp)
-            smtp.close()
-        request.addfinalizer(fin)
-        return smtp
+        yield smtp
+        print ("finalizing %s" % smtp)
+        smtp.close()
 
 The main change is the declaration of ``params`` with 
 :py:func:`@pytest.fixture <_pytest.python.fixture>`, a list of values
@@ -467,10 +534,8 @@
     def modarg(request):
         param = request.param
         print "create", param
-        def fin():
-            print "fin", param
-        request.addfinalizer(fin)
-        return param
+        yield param
+        print ("fin %s" % param)
 
     @pytest.fixture(scope="function", params=[1,2])
     def otherarg(request):
@@ -517,9 +582,9 @@
 an ordering of test execution that lead to the fewest possible "active" resources. The finalizer for the ``mod1`` parametrized resource was executed 
 before the ``mod2`` resource was setup.
 
+
 .. _`usefixtures`:
 
-
 using fixtures from classes, modules or projects
 ----------------------------------------------------------------------
 

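The two-step generator protocol behind the yield-style fixtures documented above can be sketched in plain Python (the names here are illustrative stand-ins, not pytest's real internals):

```python
# Illustrative sketch of how pytest drives a yield-style fixture:
# one iteration produces the value, a second runs the teardown code.

teardown_log = []

def smtp_fixture():
    conn = "smtp-connection"       # stand-in for smtplib.SMTP("merlinux.eu")
    yield conn                     # provide the fixture value to the tests
    teardown_log.append("closed")  # everything after the yield is teardown

gen = smtp_fixture()
value = next(gen)                  # setup: iterate once to get the value
try:
    next(gen)                      # teardown: iterate again...
except StopIteration:
    pass                           # ...expecting StopIteration on return

print(value)          # smtp-connection
print(teardown_log)   # ['closed']
```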

https://bitbucket.org/hpk42/pytest/commits/17f3904c6e8e/
Changeset:   17f3904c6e8e
User:        hpk42
Date:        2013-05-07 21:39:30
Summary:     bump version
Affected #:  2 files
Diff not available.

Repository URL: https://bitbucket.org/hpk42/pytest/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.

