From builds at drone.io Mon Jun 1 13:53:13 2015 From: builds at drone.io (Drone.io Build) Date: Mon, 01 Jun 2015 11:53:13 +0000 Subject: [Pytest-commit] [FAIL] pytest-pep8 - # 7 Message-ID: <20150601115313.27652.22900@drone.io> Build Failed Build : https://drone.io/bitbucket.org/pytest-dev/pytest-pep8/7 Project : https://drone.io/bitbucket.org/pytest-dev/pytest-pep8 Repository : https://bitbucket.org/pytest-dev/pytest-pep8 Version : 52:3b320e788c67 Author : holger krekel Branch : default Message: Merged in The-Compiler/pytest-pep8-1/The-Compiler/fix-repo-link-in-readme-1433121361107 (pull request #10) -------------- next part -------------- An HTML attachment was scrubbed... URL: From builds at drone.io Mon Jun 1 14:03:00 2015 From: builds at drone.io (Drone.io Build) Date: Mon, 01 Jun 2015 12:03:00 +0000 Subject: [Pytest-commit] [FAIL] pytest-pep8 - # 6 Message-ID: <20150601115236.89793.53406@drone.io> Build Failed Build : https://drone.io/bitbucket.org/pytest-dev/pytest-pep8/6 Project : https://drone.io/bitbucket.org/pytest-dev/pytest-pep8 Repository : https://bitbucket.org/pytest-dev/pytest-pep8 Version : 50:cd36f611ba10 Author : holger krekel Branch : default Message: Merged in The-Compiler/pytest-pep8/The-Compiler/fix-repo-link-in-setuppy-1433121197611 (pull request #9) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From issues-reply at bitbucket.org Tue Jun 2 15:13:36 2015 From: issues-reply at bitbucket.org (Graham Markall) Date: Tue, 02 Jun 2015 13:13:36 -0000 Subject: [Pytest-commit] Issue #761: Capturing stdout on Windows seems to cause lockup and 100% CPU usage (pytest-dev/pytest) Message-ID: <20150602131336.5099.62216@app08.ash-private.bitbucket.org> New issue 761: Capturing stdout on Windows seems to cause lockup and 100% CPU usage https://bitbucket.org/pytest-dev/pytest/issue/761/capturing-stdout-on-windows-seems-to-cause Graham Markall: On Windows 7 with Python 3.4.3, capturing stdout seems to keep hold of stdout after `pytest.main` is finished, and sends CPU usage to 100%. For example, I created a file, m.py, which contains only: ``` #!python print('Test') ``` If I then run pytest with `pytest.main()` and allow it to capture stdout, like: ``` C:\Users\gmarkall\tmp\pytest>python Python 3.4.3 |Continuum Analytics, Inc.| (default, Mar 6 2015, 12:06:10) [MSC v.1600 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import pytest >>> pytest.main('--pyargs m') ============================= test session starts ============================= platform win32 -- Python 3.4.3 -- py-1.4.27 -- pytest-2.7.1 rootdir: C:\Users\gmarkall\tmp\pytest, inifile: collected 0 items ============================== in 0.00 seconds =============================== 0 >>> ``` then at this point I can no longer enter any input into the interpreter (though pressing Ctrl+C does cause `KeyboardInterrupt` to pop up). However, invoking with `-s` leaves everything working as expected: ``` C:\Users\gmarkall\tmp\pytest>python Python 3.4.3 |Continuum Analytics, Inc.| (default, Mar 6 2015, 12:06:10) [MSC v.1600 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import pytest >>> pytest.main('-s --pyargs m') ============================= test session starts ============================= platform win32 -- Python 3.4.3 -- py-1.4.27 -- pytest-2.7.1 rootdir: C:\Users\gmarkall\tmp\pytest, inifile: collecting 0 itemsTest collected 0 items ============================== in 0.02 seconds =============================== 0 >>> ``` and at this point I can still interact with the Python interpreter. From issues-reply at bitbucket.org Tue Jun 2 17:56:50 2015 From: issues-reply at bitbucket.org (caryoscelus) Date: Tue, 02 Jun 2015 15:56:50 -0000 Subject: [Pytest-commit] Issue #762: py.test doesn't reimport changed modules (pytest-dev/pytest) Message-ID: <20150602155650.31231.31717@app01.ash-private.bitbucket.org> New issue 762: py.test doesn't reimport changed modules https://bitbucket.org/pytest-dev/pytest/issue/762/pytest-doesnt-reimport-changed-modules caryoscelus: Test case: ``` #!python #module.py def fail(): raise Exception('fail') ``` ``` #!python #test_a.py import module def do_nothing(): pass module.fail = do_nothing def test_nofail(): module.fail() ``` ``` #!python #test_b.py import module import pytest def test_fail(): with pytest.raises(Exception): module.fail() ``` ```pytest test_b.py``` runs fine, but just ```pytest``` fails From issues-reply at bitbucket.org Wed Jun 3 12:51:01 2015 From: issues-reply at bitbucket.org (saaj) Date: Wed, 03 Jun 2015 10:51:01 -0000 Subject: [Pytest-commit] Issue #254: Tox >= 2 fails with UnicodeDecodeError trying to read project's readme (hpk42/tox) Message-ID: <20150603105101.17417.68188@app08.ash-private.bitbucket.org> New issue 254: Tox >= 2 fails with UnicodeDecodeError trying to read project's readme https://bitbucket.org/hpk42/tox/issue/254/tox-2-fails-with-unicodedecodeerror-trying saaj: A CI build failed on a minor documentation update in py3 envs with new Tox installed. Tox < 2 is fine. 
Here's the stack trace: ``` Processing ./.tox/dist/HermesCache-0.5.2.zip Complete output from command python setup.py egg_info: Traceback (most recent call last): File "", line 20, in File "/tmp/pip-um4zde-build/setup.py", line 23, in long_description = open('README.txt').read(), File "/home/ubuntu/src/bitbucket.org/saaj/hermes/.tox/py33-pylibmc/lib/python3.3/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2413: ordinal not in range(128) ``` Here's the CI build [link](https://drone.io/bitbucket.org/saaj/hermes/20). It's kind of strange, because I can open python3 in a terminal and `open('README.txt').read()` works fine. Pip version is 7.0.3 locally and 1.4.1 on the CI server; the result seems to be the same with both. Given the versions, it's also likely a bug that the download-cache deprecation warning is shown regardless of the installed pip version. > DEPRECATION: --download-cache has been deprecated and will be removed in the future. Pip now automatically uses and configures its cache. From commits-noreply at bitbucket.org Wed Jun 3 13:25:46 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 11:25:46 -0000 Subject: [Pytest-commit] commit/tox: aconrad: pass LANG variable to the test environment Message-ID: <20150603112546.28208.37881@app09.ash-private.bitbucket.org> 1 new commit in tox: https://bitbucket.org/hpk42/tox/commits/c1c7775d628e/ Changeset: c1c7775d628e User: aconrad Date: 2015-06-01 18:28:43+00:00 Summary: pass LANG variable to the test environment Affected #: 3 files diff -r 7b318dbb8376844d469588c1fc8a100f389e513d -r c1c7775d628e30e051a849fd775b8e56f4a168e5 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,3 +1,9 @@ +unreleased +---------- + +- tox now passes the LANG variable from the tox invocation environment to the + test environment. 
+ 2.0.1 ----------- diff -r 7b318dbb8376844d469588c1fc8a100f389e513d -r c1c7775d628e30e051a849fd775b8e56f4a168e5 tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -716,6 +716,7 @@ assert "TMPDIR" in envconfig.passenv assert "PATH" in envconfig.passenv assert "PIP_INDEX_URL" in envconfig.passenv + assert "LANG" in envconfig.passenv assert "A123A" in envconfig.passenv assert "A123B" in envconfig.passenv diff -r 7b318dbb8376844d469588c1fc8a100f389e513d -r c1c7775d628e30e051a849fd775b8e56f4a168e5 tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -387,7 +387,7 @@ help="list of X=Y lines with environment variable settings") def passenv(testenv_config, value): - passenv = set(["PATH", "PIP_INDEX_URL"]) + passenv = set(["PATH", "PIP_INDEX_URL", "LANG"]) # we ensure that tmp directory settings are passed on # we could also set it to the per-venv "envtmpdir" Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Wed Jun 3 13:29:42 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 11:29:42 -0000 Subject: [Pytest-commit] commit/tox: hpk42: add SYSTEMDRIVE into default passenv on windows to allow pip6 to work. Message-ID: <20150603112942.9961.83759@app12.ash-private.bitbucket.org> 1 new commit in tox: https://bitbucket.org/hpk42/tox/commits/ea5e249649c3/ Changeset: ea5e249649c3 User: hpk42 Date: 2015-06-03 11:29:33+00:00 Summary: add SYSTEMDRIVE into default passenv on windows to allow pip6 to work. Thanks Michael Krause. 
Affected #: 5 files diff -r c1c7775d628e30e051a849fd775b8e56f4a168e5 -r ea5e249649c30bd6bbf68b45c6b3a6f71149ddb8 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,8 +1,12 @@ -unreleased +2.0.2 ---------- -- tox now passes the LANG variable from the tox invocation environment to the - test environment. +- fix issue247: tox now passes the LANG variable from the tox invocation + environment to the test environment by default. + +- add SYSTEMDRIVE into default passenv on windows to allow pip6 to work. + Thanks Michael Krause. + 2.0.1 ----------- diff -r c1c7775d628e30e051a849fd775b8e56f4a168e5 -r ea5e249649c30bd6bbf68b45c6b3a6f71149ddb8 setup.py --- a/setup.py +++ b/setup.py @@ -48,7 +48,7 @@ description='virtualenv-based automation of test activities', long_description=open("README.rst").read(), url='http://tox.testrun.org/', - version='2.0.1', + version='2.0.2', license='http://opensource.org/licenses/MIT', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], author='holger krekel', diff -r c1c7775d628e30e051a849fd775b8e56f4a168e5 -r ea5e249649c30bd6bbf68b45c6b3a6f71149ddb8 tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -709,6 +709,7 @@ envconfig = config.envconfigs['python'] if plat == "win32": assert "PATHEXT" in envconfig.passenv + assert "SYSTEMDRIVE" in envconfig.passenv assert "SYSTEMROOT" in envconfig.passenv assert "TEMP" in envconfig.passenv assert "TMP" in envconfig.passenv diff -r c1c7775d628e30e051a849fd775b8e56f4a168e5 -r ea5e249649c30bd6bbf68b45c6b3a6f71149ddb8 tox/__init__.py --- a/tox/__init__.py +++ b/tox/__init__.py @@ -1,5 +1,5 @@ # -__version__ = '2.0.1' +__version__ = '2.0.2' from .hookspecs import hookspec, hookimpl # noqa diff -r c1c7775d628e30e051a849fd775b8e56f4a168e5 -r ea5e249649c30bd6bbf68b45c6b3a6f71149ddb8 tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -394,6 +394,7 @@ # but this leads to very long paths when run with jenkins # so we just pass it on by default for now. 
if sys.platform == "win32": + passenv.add("SYSTEMDRIVE") # needed for pip6 passenv.add("SYSTEMROOT") # needed for python's crypto module passenv.add("PATHEXT") # needed for discovering executables passenv.add("TEMP") Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Wed Jun 3 15:17:50 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 13:17:50 -0000 Subject: [Pytest-commit] commit/tox: 2 new changesets Message-ID: <20150603131750.26922.93747@app14.ash-private.bitbucket.org> 2 new commits in tox: https://bitbucket.org/hpk42/tox/commits/82561ff2cbf4/ Changeset: 82561ff2cbf4 User: hpk42 Date: 2015-06-03 13:14:20+00:00 Summary: fix pep8 issue Affected #: 1 file diff -r ea5e249649c30bd6bbf68b45c6b3a6f71149ddb8 -r 82561ff2cbf48d8bf5be1384f5f3bd04c805fd30 tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -394,9 +394,9 @@ # but this leads to very long paths when run with jenkins # so we just pass it on by default for now. 
if sys.platform == "win32": - passenv.add("SYSTEMDRIVE") # needed for pip6 - passenv.add("SYSTEMROOT") # needed for python's crypto module - passenv.add("PATHEXT") # needed for discovering executables + passenv.add("SYSTEMDRIVE") # needed for pip6 + passenv.add("SYSTEMROOT") # needed for python's crypto module + passenv.add("PATHEXT") # needed for discovering executables passenv.add("TEMP") passenv.add("TMP") else: https://bitbucket.org/hpk42/tox/commits/93f262ca702e/ Changeset: 93f262ca702e User: hpk42 Date: 2015-06-03 13:14:25+00:00 Summary: Added tag 2.0.2 for changeset 82561ff2cbf4 Affected #: 1 file diff -r 82561ff2cbf48d8bf5be1384f5f3bd04c805fd30 -r 93f262ca702e8d2e1192875ded9eaa584de3aa00 .hgtags --- a/.hgtags +++ b/.hgtags @@ -25,3 +25,4 @@ 452288d6c50042ccfc1c944b24f4eb47df8f6823 1.9.2 b7e498efd0ecd543a870431ea8d34f2882d5ace8 2.0.0 2897c9e3a019ee29948cbeda319ffac0e6902053 2.0.1 +82561ff2cbf48d8bf5be1384f5f3bd04c805fd30 2.0.2 Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From issues-reply at bitbucket.org Wed Jun 3 18:45:23 2015 From: issues-reply at bitbucket.org (Felix Yan) Date: Wed, 03 Jun 2015 16:45:23 -0000 Subject: [Pytest-commit] Issue #255: Tests trying to write into '/.cache' and failed (hpk42/tox) Message-ID: <20150603164523.10521.76474@app14.ash-private.bitbucket.org> New issue 255: Tests trying to write into '/.cache' and failed https://bitbucket.org/hpk42/tox/issue/255/tests-trying-to-write-into-cache-and Felix Yan: I am getting test failures when building in a chroot, full logs: https://paste.xinu.at/GrF/ I did not get it at the time when tox 2.0.1 was released, but as you can see in the logs, 2.0.1 fails now. I'm still trying to figure out what changed during the time (maybe due to a pip upgrade or something?). I have also tried to specify HOME but that didn't help. 
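The `/.cache` path in the failing logs above is explained by how `~` expands: pip's default cache directory lives under the invoking user's home, and on POSIX systems `~` follows `$HOME`. A minimal diagnostic sketch (the `default_pip_cache` helper is hypothetical, not pip's actual code), assuming the chroot leaves HOME set to `/`:

```python
import os
import posixpath

# Hypothetical helper: resolve '~/.cache/pip' as if HOME were set to
# `home`, using POSIX expansion rules. In a chroot where HOME is '/',
# this yields '/.cache/pip', which is usually not writable -- matching
# the "/.cache" failures reported above.
def default_pip_cache(home):
    saved = os.environ.get("HOME")
    os.environ["HOME"] = home
    try:
        return posixpath.expanduser("~/.cache/pip")
    finally:
        if saved is None:
            os.environ.pop("HOME", None)
        else:
            os.environ["HOME"] = saved

print(default_pip_cache("/"))           # -> /.cache/pip
print(default_pip_cache("/home/user"))  # -> /home/user/.cache/pip
```

Pointing HOME (or the XDG cache location) at a writable directory inside the chroot is the usual workaround for this class of failure.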
From issues-reply at bitbucket.org Wed Jun 3 20:19:33 2015 From: issues-reply at bitbucket.org (Amith Reddy Ravuru) Date: Wed, 03 Jun 2015 18:19:33 -0000 Subject: [Pytest-commit] Issue #763: Add option to generate HTML Report (pytest-dev/pytest) Message-ID: <20150603181933.11055.36363@app14.ash-private.bitbucket.org> New issue 763: Add option to generate HTML Report https://bitbucket.org/pytest-dev/pytest/issue/763/add-option-to-generate-html-report Amith Reddy Ravuru: Most teams ideally require HTML reports once the tests are run, so it would be really helpful to have an HTML report generation option as part of pytest. Additionally, it could take an HTML template file as input, letting the user control the format of the report. Expose as many parameters as possible to the template file, such as test start time, test duration, test case output, test case statistics, etc. 3 options: --with-html => enable HTML reporting --html-report-file => HTML report file --html-template-file => HTML template file From commits-noreply at bitbucket.org Wed Jun 3 23:33:57 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:33:57 -0000 Subject: [Pytest-commit] commit/pytest: flub: Use platform.python_version() to show Python version number Message-ID: <20150603213357.15087.75126@app08.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/88488f8bb2eb/ Changeset: 88488f8bb2eb User: flub Date: 2015-05-31 19:31:31+00:00 Summary: Use platform.python_version() to show Python version number This results in something like "3.5.0b2" for non-final releases while still being "3.5.0" for final releases. 
Affected #: 1 file diff -r 1a913c55c66d6841c113df2247942433e693ce0f -r 88488f8bb2eb46b19c67d595b1d2e220e89c418a _pytest/terminal.py --- a/_pytest/terminal.py +++ b/_pytest/terminal.py @@ -7,6 +7,7 @@ import py import sys import time +import platform def pytest_addoption(parser): @@ -274,7 +275,7 @@ if not self.showheader: return self.write_sep("=", "test session starts", bold=True) - verinfo = ".".join(map(str, sys.version_info[:3])) + verinfo = platform.python_version() msg = "platform %s -- Python %s" % (sys.platform, verinfo) if hasattr(sys, 'pypy_version_info'): verinfo = ".".join(map(str, sys.pypy_version_info[:3])) Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Wed Jun 3 23:34:18 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:34:18 -0000 Subject: [Pytest-commit] commit/pytest: 2 new changesets Message-ID: <20150603213418.21667.80735@app12.ash-private.bitbucket.org> 2 new commits in pytest: https://bitbucket.org/pytest-dev/pytest/commits/937947dc19ac/ Changeset: 937947dc19ac User: flub Date: 2015-05-20 09:33:13+00:00 Summary: Ignore emacs' .dir-locals.el file This can be used to e.g. automatically set a specific syntax checker or virtualenv to activate. 
Affected #: 1 file diff -r 6ee4b4d70a9b81f525c6fe46eab1b2018de6f4b0 -r 937947dc19acc8123f85dfe55707cca042ac9089 .hgignore --- a/.hgignore +++ b/.hgignore @@ -37,3 +37,4 @@ .coverage .ropeproject *.sublime-* +.dir-locals.el https://bitbucket.org/pytest-dev/pytest/commits/84621bcbb544/ Changeset: 84621bcbb544 User: hpk42 Date: 2015-06-03 21:34:13+00:00 Summary: Merged in flub/pytest (pull request #299) Ignore emacs' .dir-locals.el file Affected #: 1 file diff -r 88488f8bb2eb46b19c67d595b1d2e220e89c418a -r 84621bcbb544f251f2aac602c07b68797e70d51d .hgignore --- a/.hgignore +++ b/.hgignore @@ -37,3 +37,4 @@ .coverage .ropeproject *.sublime-* +.dir-locals.el Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Wed Jun 3 23:34:18 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:34:18 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: Merged in flub/pytest (pull request #299) Message-ID: <20150603213418.11940.9725@app10.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/84621bcbb544/ Changeset: 84621bcbb544 User: hpk42 Date: 2015-06-03 21:34:13+00:00 Summary: Merged in flub/pytest (pull request #299) Ignore emacs' .dir-locals.el file Affected #: 1 file diff -r 88488f8bb2eb46b19c67d595b1d2e220e89c418a -r 84621bcbb544f251f2aac602c07b68797e70d51d .hgignore --- a/.hgignore +++ b/.hgignore @@ -37,3 +37,4 @@ .coverage .ropeproject *.sublime-* +.dir-locals.el Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
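The `platform.python_version()` change merged earlier in this digest is easy to demonstrate: joining `sys.version_info[:3]` drops the release level, while `platform.python_version()` keeps pre-release suffixes such as `b2`. A small sketch comparing both header strings:

```python
import sys
import platform

# The old terminal-header code: joins only major.minor.micro, so a
# pre-release such as 3.5.0b2 would be reported as plain "3.5.0".
old_verinfo = ".".join(map(str, sys.version_info[:3]))

# The new header code: includes the pre-release suffix when present.
new_verinfo = platform.python_version()

print(old_verinfo, new_verinfo)
# On a final release the two strings are identical; on an alpha/beta/rc
# build new_verinfo carries an extra "aN"/"bN"/"rcN" suffix.
assert new_verinfo.startswith(old_verinfo)
```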
From commits-noreply at bitbucket.org Wed Jun 3 23:34:41 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:34:41 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: Close branch pytest-2.7 Message-ID: <20150603213441.9399.91300@app10.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/2b8aadc60665/ Changeset: 2b8aadc60665 Branch: pytest-2.7 User: hpk42 Date: 2015-06-03 21:34:38+00:00 Summary: Close branch pytest-2.7 Affected #: 0 files Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Wed Jun 3 23:37:30 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:37:30 -0000 Subject: [Pytest-commit] commit/pytest: 2 new changesets Message-ID: <20150603213730.26186.31148@app13.ash-private.bitbucket.org> 2 new commits in pytest: https://bitbucket.org/pytest-dev/pytest/commits/eea929322b9d/ Changeset: eea929322b9d User: The-Compiler Date: 2015-05-19 20:59:49+00:00 Summary: Fix monkeypatch.setenv with string and raising=False. Fixes #746. Affected #: 2 files diff -r 6ee4b4d70a9b81f525c6fe46eab1b2018de6f4b0 -r eea929322b9dd2d0a19892ac29e2b80fc38617c5 _pytest/monkeypatch.py --- a/_pytest/monkeypatch.py +++ b/_pytest/monkeypatch.py @@ -27,7 +27,7 @@ -def derive_importpath(import_path): +def derive_importpath(import_path, raising): import pytest if not isinstance(import_path, _basestring) or "." 
not in import_path: raise TypeError("must be absolute import path string, not %r" % @@ -51,7 +51,8 @@ attr = rest.pop() obj = getattr(obj, attr) attr = rest[0] - getattr(obj, attr) + if raising: + getattr(obj, attr) except AttributeError: __tracebackhide__ = True pytest.fail("object %r has no attribute %r" % (obj, attr)) @@ -95,7 +96,7 @@ "setattr(target, value) with target being a dotted " "import string") value = name - name, target = derive_importpath(target) + name, target = derive_importpath(target, raising) oldval = getattr(target, name, notset) if raising and oldval is notset: @@ -124,7 +125,7 @@ raise TypeError("use delattr(target, name) or " "delattr(target) with target being a dotted " "import string") - name, target = derive_importpath(target) + name, target = derive_importpath(target, raising) if not hasattr(target, name): if raising: diff -r 6ee4b4d70a9b81f525c6fe46eab1b2018de6f4b0 -r eea929322b9dd2d0a19892ac29e2b80fc38617c5 testing/test_monkeypatch.py --- a/testing/test_monkeypatch.py +++ b/testing/test_monkeypatch.py @@ -62,6 +62,11 @@ pytest.raises(pytest.fail.Exception, lambda: monkeypatch.setattr("os.path.qweqwe", None)) + def test_unknown_attr_non_raising(self, monkeypatch): + # https://bitbucket.org/pytest-dev/pytest/issue/746/ + monkeypatch.setattr('os.path.qweqwe', 42, raising=False) + assert os.path.qweqwe == 42 + def test_delattr(self, monkeypatch): monkeypatch.delattr("os.path.abspath") assert not hasattr(os.path, "abspath") https://bitbucket.org/pytest-dev/pytest/commits/87208a0f0714/ Changeset: 87208a0f0714 User: hpk42 Date: 2015-06-03 21:37:26+00:00 Summary: Merged in The-Compiler/pytest (pull request #298) Fix monkeypatch.setenv with string and raising=False. 
Affected #: 2 files diff -r 84621bcbb544f251f2aac602c07b68797e70d51d -r 87208a0f071435507183f93c87043d6076cb2661 _pytest/monkeypatch.py --- a/_pytest/monkeypatch.py +++ b/_pytest/monkeypatch.py @@ -27,7 +27,7 @@ -def derive_importpath(import_path): +def derive_importpath(import_path, raising): import pytest if not isinstance(import_path, _basestring) or "." not in import_path: raise TypeError("must be absolute import path string, not %r" % @@ -51,7 +51,8 @@ attr = rest.pop() obj = getattr(obj, attr) attr = rest[0] - getattr(obj, attr) + if raising: + getattr(obj, attr) except AttributeError: __tracebackhide__ = True pytest.fail("object %r has no attribute %r" % (obj, attr)) @@ -95,7 +96,7 @@ "setattr(target, value) with target being a dotted " "import string") value = name - name, target = derive_importpath(target) + name, target = derive_importpath(target, raising) oldval = getattr(target, name, notset) if raising and oldval is notset: @@ -124,7 +125,7 @@ raise TypeError("use delattr(target, name) or " "delattr(target) with target being a dotted " "import string") - name, target = derive_importpath(target) + name, target = derive_importpath(target, raising) if not hasattr(target, name): if raising: diff -r 84621bcbb544f251f2aac602c07b68797e70d51d -r 87208a0f071435507183f93c87043d6076cb2661 testing/test_monkeypatch.py --- a/testing/test_monkeypatch.py +++ b/testing/test_monkeypatch.py @@ -62,6 +62,11 @@ pytest.raises(pytest.fail.Exception, lambda: monkeypatch.setattr("os.path.qweqwe", None)) + def test_unknown_attr_non_raising(self, monkeypatch): + # https://bitbucket.org/pytest-dev/pytest/issue/746/ + monkeypatch.setattr('os.path.qweqwe', 42, raising=False) + assert os.path.qweqwe == 42 + def test_delattr(self, monkeypatch): monkeypatch.delattr("os.path.abspath") assert not hasattr(os.path, "abspath") Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. 
You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Wed Jun 3 23:37:31 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:37:31 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: Merged in The-Compiler/pytest (pull request #298) Message-ID: <20150603213731.29895.53046@app14.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/87208a0f0714/ Changeset: 87208a0f0714 User: hpk42 Date: 2015-06-03 21:37:26+00:00 Summary: Merged in The-Compiler/pytest (pull request #298) Fix monkeypatch.setenv with string and raising=False. Affected #: 2 files diff -r 84621bcbb544f251f2aac602c07b68797e70d51d -r 87208a0f071435507183f93c87043d6076cb2661 _pytest/monkeypatch.py --- a/_pytest/monkeypatch.py +++ b/_pytest/monkeypatch.py @@ -27,7 +27,7 @@ -def derive_importpath(import_path): +def derive_importpath(import_path, raising): import pytest if not isinstance(import_path, _basestring) or "." 
not in import_path: raise TypeError("must be absolute import path string, not %r" % @@ -51,7 +51,8 @@ attr = rest.pop() obj = getattr(obj, attr) attr = rest[0] - getattr(obj, attr) + if raising: + getattr(obj, attr) except AttributeError: __tracebackhide__ = True pytest.fail("object %r has no attribute %r" % (obj, attr)) @@ -95,7 +96,7 @@ "setattr(target, value) with target being a dotted " "import string") value = name - name, target = derive_importpath(target) + name, target = derive_importpath(target, raising) oldval = getattr(target, name, notset) if raising and oldval is notset: @@ -124,7 +125,7 @@ raise TypeError("use delattr(target, name) or " "delattr(target) with target being a dotted " "import string") - name, target = derive_importpath(target) + name, target = derive_importpath(target, raising) if not hasattr(target, name): if raising: diff -r 84621bcbb544f251f2aac602c07b68797e70d51d -r 87208a0f071435507183f93c87043d6076cb2661 testing/test_monkeypatch.py --- a/testing/test_monkeypatch.py +++ b/testing/test_monkeypatch.py @@ -62,6 +62,11 @@ pytest.raises(pytest.fail.Exception, lambda: monkeypatch.setattr("os.path.qweqwe", None)) + def test_unknown_attr_non_raising(self, monkeypatch): + # https://bitbucket.org/pytest-dev/pytest/issue/746/ + monkeypatch.setattr('os.path.qweqwe', 42, raising=False) + assert os.path.qweqwe == 42 + def test_delattr(self, monkeypatch): monkeypatch.delattr("os.path.abspath") assert not hasattr(os.path, "abspath") Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
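The monkeypatch patch above threads `raising` into `derive_importpath` so that the final attribute lookup is only verified when `raising=True`. A simplified standalone reconstruction (not pytest's exact code; the real version also walks dotted module paths more carefully):

```python
import importlib

def derive_importpath_sketch(import_path, raising):
    """Simplified stand-in for _pytest.monkeypatch.derive_importpath:
    split 'pkg.mod.attr' into (attr, target object). With raising=True
    a missing final attribute is an error; with raising=False it is
    allowed, so setattr(..., raising=False) can create new attributes."""
    module_path, attr = import_path.rsplit(".", 1)
    target = importlib.import_module(module_path)
    if raising:
        getattr(target, attr)  # raises AttributeError if absent
    return attr, target

# An existing attribute resolves either way:
name, target = derive_importpath_sketch("os.path.sep", raising=False)

# A non-existent attribute only fails when raising=True:
try:
    derive_importpath_sketch("os.path.qweqwe", raising=True)
except AttributeError:
    print("raising=True rejects the missing attribute")

# With raising=False the attribute can be created, mirroring the new
# test_unknown_attr_non_raising test in the diff above:
name, target = derive_importpath_sketch("os.path.qweqwe", raising=False)
setattr(target, name, 42)
import os.path
assert os.path.qweqwe == 42
```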
From commits-noreply at bitbucket.org Wed Jun 3 23:39:15 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:39:15 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: add changelog for latest merge Message-ID: <20150603213915.26251.55193@app11.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/afa7c591b96a/ Changeset: afa7c591b96a User: hpk42 Date: 2015-06-03 21:39:00+00:00 Summary: add changelog for latest merge Affected #: 2 files diff -r 87208a0f071435507183f93c87043d6076cb2661 -r afa7c591b96a6bd090e78c3e5897f9af6900d4a0 AUTHORS --- a/AUTHORS +++ b/AUTHORS @@ -49,3 +49,4 @@ Dave Hunt Charles Cloud Eduardo Schettino +Florian Bruhin diff -r 87208a0f071435507183f93c87043d6076cb2661 -r afa7c591b96a6bd090e78c3e5897f9af6900d4a0 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -51,6 +51,9 @@ the test. In all cases you get back a RunResult but the inprocess one will also have a "reprec" attribute with the recorded events/reports. +- fix monkeypatch.setattr("x.y", raising=False) to actually not raise + if "y" is not a pre-existing attribute. Thanks Florian Bruhin. + 2.7.1 (compared to 2.7.0) ----------------------------- Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
From commits-noreply at bitbucket.org Wed Jun 3 23:40:13 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:40:13 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: Merged in nicoddemus/pytest/issue-741-pytester-output (pull request #293) Message-ID: <20150603214013.17712.91790@app05.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/8eda4211c0dd/ Changeset: 8eda4211c0dd User: hpk42 Date: 2015-06-03 21:40:08+00:00 Summary: Merged in nicoddemus/pytest/issue-741-pytester-output (pull request #293) Make "running" output from testdir.run copy/pastable Affected #: 1 file diff -r afa7c591b96a6bd090e78c3e5897f9af6900d4a0 -r 8eda4211c0ddcfd7e8ae316324647b5ba7342dbe _pytest/pytester.py --- a/_pytest/pytester.py +++ b/_pytest/pytester.py @@ -871,7 +871,8 @@ cmdargs = [str(x) for x in cmdargs] p1 = self.tmpdir.join("stdout") p2 = self.tmpdir.join("stderr") - print_("running", cmdargs, "curdir=", py.path.local()) + print_("running:", ' '.join(cmdargs)) + print_(" in:", str(py.path.local())) f1 = codecs.open(str(p1), "w", encoding="utf8") f2 = codecs.open(str(p2), "w", encoding="utf8") try: Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
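The pytester change merged above is purely about output formatting: printing the argument list gives Python repr syntax, while joining the arguments yields a line that can be pasted into a shell. A standalone illustration (`cmdargs` here is an invented example, not from the patch):

```python
cmdargs = ["py.test", "--maxfail=1", "test_foo.py"]

# Before the patch the header printed the list itself, which shows up
# as a Python repr and cannot be pasted into a shell:
print("running", cmdargs)
# running ['py.test', '--maxfail=1', 'test_foo.py']

# After the patch the arguments are joined into one shell-like line:
print("running:", " ".join(cmdargs))
# running: py.test --maxfail=1 test_foo.py
```

A fuller version would quote arguments containing spaces, e.g. with `shlex.quote`, but the joined form covers the common case from issue 741.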
From commits-noreply at bitbucket.org Wed Jun 3 23:40:13 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:40:13 -0000 Subject: [Pytest-commit] commit/pytest: 2 new changesets Message-ID: <20150603214013.7957.22181@app06.ash-private.bitbucket.org> 2 new commits in pytest: https://bitbucket.org/pytest-dev/pytest/commits/79a2a1095bcc/ Changeset: 79a2a1095bcc Branch: issue-741-pytester-output User: nicoddemus Date: 2015-05-09 13:24:33+00:00 Summary: Make "running" output from testdir.run copy/pastable fix 741 Affected #: 1 file diff -r 162f18fafaba4bc42fa61b9e4fbdf3d7b645748e -r 79a2a1095bcc2e88b5f37ef7eaf15ee5b9cd8298 _pytest/pytester.py --- a/_pytest/pytester.py +++ b/_pytest/pytester.py @@ -871,7 +871,8 @@ cmdargs = [str(x) for x in cmdargs] p1 = self.tmpdir.join("stdout") p2 = self.tmpdir.join("stderr") - print_("running", cmdargs, "curdir=", py.path.local()) + print_("running:", ' '.join(cmdargs)) + print_(" in:", str(py.path.local())) f1 = codecs.open(str(p1), "w", encoding="utf8") f2 = codecs.open(str(p2), "w", encoding="utf8") try: https://bitbucket.org/pytest-dev/pytest/commits/8eda4211c0dd/ Changeset: 8eda4211c0dd User: hpk42 Date: 2015-06-03 21:40:08+00:00 Summary: Merged in nicoddemus/pytest/issue-741-pytester-output (pull request #293) Make "running" output from testdir.run copy/pastable Affected #: 1 file diff -r afa7c591b96a6bd090e78c3e5897f9af6900d4a0 -r 8eda4211c0ddcfd7e8ae316324647b5ba7342dbe _pytest/pytester.py --- a/_pytest/pytester.py +++ b/_pytest/pytester.py @@ -871,7 +871,8 @@ cmdargs = [str(x) for x in cmdargs] p1 = self.tmpdir.join("stdout") p2 = self.tmpdir.join("stderr") - print_("running", cmdargs, "curdir=", py.path.local()) + print_("running:", ' '.join(cmdargs)) + print_(" in:", str(py.path.local())) f1 = codecs.open(str(p1), "w", encoding="utf8") f2 = codecs.open(str(p2), "w", encoding="utf8") try: Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit 
notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Wed Jun 3 23:41:13 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:41:13 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: fix issue741: make running output from testdir.run copy/pasteable Message-ID: <20150603214113.21849.41297@app07.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/362995ba2d38/ Changeset: 362995ba2d38 User: hpk42 Date: 2015-06-03 21:40:42+00:00 Summary: fix issue741: make running output from testdir.run copy/pasteable Affected #: 1 file diff -r 8eda4211c0ddcfd7e8ae316324647b5ba7342dbe -r 362995ba2d384c6e06d57be01c670980db5bd592 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -54,6 +54,9 @@ - fix monkeypatch.setattr("x.y", raising=False) to actually not raise if "y" is not a pre-existing attribute. Thanks Florian Bruhin. +- fix issue741: make running output from testdir.run copy/pasteable + Thanks Bruno Oliveira. + 2.7.1 (compared to 2.7.0) ----------------------------- Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
From commits-noreply at bitbucket.org Wed Jun 3 23:43:27 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 21:43:27 -0000 Subject: [Pytest-commit] commit/pytest: 2 new changesets Message-ID: <20150603214327.21035.71648@app12.ash-private.bitbucket.org> 2 new commits in pytest: https://bitbucket.org/pytest-dev/pytest/commits/7227c7098c9d/ Changeset: 7227c7098c9d Branch: pytest-2.7 User: hpk42 Date: 2015-06-03 21:42:38+00:00 Summary: fix typo Affected #: 1 file diff -r 2b8aadc60665e3605eabd28e00c201b8c15d6ebf -r 7227c7098c9d4dee502cf1f53818c02c153ef587 doc/en/example/simple.txt --- a/doc/en/example/simple.txt +++ b/doc/en/example/simple.txt @@ -10,7 +10,7 @@ .. regendoc:wipe Suppose we want to write a test that depends on a command line option. -Here is a basic pattern how to achieve this:: +Here is a basic pattern to achieve this:: # content of test_sample.py def test_answer(cmdopt): @@ -41,9 +41,9 @@ F ================================= FAILURES ================================= _______________________________ test_answer ________________________________ - + cmdopt = 'type1' - + def test_answer(cmdopt): if cmdopt == "type1": print ("first") @@ -51,7 +51,7 @@ print ("second") > assert 0 # to see what was printed E assert 0 - + test_sample.py:6: AssertionError --------------------------- Captured stdout call --------------------------- first @@ -109,9 +109,9 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 0 items - + ============================= in 0.00 seconds ============================= .. 
_`excontrolskip`: @@ -154,13 +154,13 @@ $ py.test -rs # "-rs" means report details on the little 's' =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 2 items - + test_module.py .s ========================= short test summary info ========================== SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run - + =================== 1 passed, 1 skipped in 0.01 seconds ==================== Or run it including the ``slow`` marked test:: @@ -168,11 +168,11 @@ $ py.test --runslow =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 2 items - + test_module.py .. - + ========================= 2 passed in 0.01 seconds ========================= Writing well integrated assertion helpers @@ -205,11 +205,11 @@ F ================================= FAILURES ================================= ______________________________ test_something ______________________________ - + def test_something(): > checkconfig(42) E Failed: not configured: 42 - + test_checkconfig.py:8: Failed 1 failed in 0.02 seconds @@ -260,10 +260,10 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: project deps: mylib-1.1 collected 0 items - + ============================= in 0.00 seconds ============================= .. 
regendoc:wipe @@ -284,11 +284,11 @@ $ py.test -v =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: info1: did you know that ... did you? collecting ... collected 0 items - + ============================= in 0.00 seconds ============================= and nothing when run plainly:: @@ -296,9 +296,9 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 0 items - + ============================= in 0.00 seconds ============================= profiling test duration @@ -329,11 +329,11 @@ $ py.test --durations=3 =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 3 items - + test_some_are_slow.py ... - + ========================= slowest 3 test durations ========================= 0.20s call test_some_are_slow.py::test_funcslow2 0.10s call test_some_are_slow.py::test_funcslow1 @@ -391,20 +391,20 @@ $ py.test -rx =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 4 items - + test_step.py .Fx. 
- + ================================= FAILURES ================================= ____________________ TestUserHandling.test_modification ____________________ - + self = - + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError ========================= short test summary info ========================== XFAIL test_step.py::TestUserHandling::()::test_deletion @@ -462,14 +462,14 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 7 items - + test_step.py .Fx. a/test_db.py F a/test_db2.py F b/test_error.py E - + ================================== ERRORS ================================== _______________________ ERROR at setup of test_root ________________________ file /tmp/doc-exec-162/b/test_error.py, line 1 @@ -477,37 +477,37 @@ fixture 'db' not found available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd use 'py.test --fixtures [testpath]' for help on them. 
- + /tmp/doc-exec-162/b/test_error.py:1 ================================= FAILURES ================================= ____________________ TestUserHandling.test_modification ____________________ - + self = - + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError _________________________________ test_a1 __________________________________ - + db = - + def test_a1(db): > assert 0, db # to show value E AssertionError: E assert 0 - + a/test_db.py:2: AssertionError _________________________________ test_a2 __________________________________ - + db = - + def test_a2(db): > assert 0, db # to show value E AssertionError: E assert 0 - + a/test_db2.py:2: AssertionError ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ========== @@ -565,27 +565,27 @@ $ py.test test_module.py =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 2 items - + test_module.py FF - + ================================= FAILURES ================================= ________________________________ test_fail1 ________________________________ - + tmpdir = local('/tmp/pytest-22/test_fail10') - + def test_fail1(tmpdir): > assert 0 E assert 0 - + test_module.py:2: AssertionError ________________________________ test_fail2 ________________________________ - + def test_fail2(): > assert 0 E assert 0 - + test_module.py:4: AssertionError ========================= 2 failed in 0.02 seconds ========================= @@ -656,38 +656,38 @@ $ py.test -s test_module.py =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 3 items - + test_module.py Esetting up a test failed! 
test_module.py::test_setup_fails Fexecuting test failed test_module.py::test_call_fails F - + ================================== ERRORS ================================== ____________________ ERROR at setup of test_setup_fails ____________________ - + @pytest.fixture def other(): > assert 0 E assert 0 - + test_module.py:6: AssertionError ================================= FAILURES ================================= _____________________________ test_call_fails ______________________________ - + something = None - + def test_call_fails(something): > assert 0 E assert 0 - + test_module.py:12: AssertionError ________________________________ test_fail2 ________________________________ - + def test_fail2(): > assert 0 E assert 0 - + test_module.py:15: AssertionError ==================== 2 failed, 1 error in 0.02 seconds ===================== https://bitbucket.org/pytest-dev/pytest/commits/44eb61fafecb/ Changeset: 44eb61fafecb User: hpk42 Date: 2015-06-03 21:43:12+00:00 Summary: port typo fix Affected #: 1 file diff -r 362995ba2d384c6e06d57be01c670980db5bd592 -r 44eb61fafecb5990112ac91b58d2496ed81a83b0 doc/en/example/simple.txt --- a/doc/en/example/simple.txt +++ b/doc/en/example/simple.txt @@ -10,7 +10,7 @@ .. regendoc:wipe Suppose we want to write a test that depends on a command line option. 
-Here is a basic pattern how to achieve this:: +Here is a basic pattern to achieve this:: # content of test_sample.py def test_answer(cmdopt): @@ -41,9 +41,9 @@ F ================================= FAILURES ================================= _______________________________ test_answer ________________________________ - + cmdopt = 'type1' - + def test_answer(cmdopt): if cmdopt == "type1": print ("first") @@ -51,7 +51,7 @@ print ("second") > assert 0 # to see what was printed E assert 0 - + test_sample.py:6: AssertionError --------------------------- Captured stdout call --------------------------- first @@ -109,9 +109,9 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 0 items - + ============================= in 0.00 seconds ============================= .. _`excontrolskip`: @@ -154,13 +154,13 @@ $ py.test -rs # "-rs" means report details on the little 's' =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 2 items - + test_module.py .s ========================= short test summary info ========================== SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run - + =================== 1 passed, 1 skipped in 0.01 seconds ==================== Or run it including the ``slow`` marked test:: @@ -168,11 +168,11 @@ $ py.test --runslow =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 2 items - + test_module.py .. 
- + ========================= 2 passed in 0.01 seconds ========================= Writing well integrated assertion helpers @@ -205,11 +205,11 @@ F ================================= FAILURES ================================= ______________________________ test_something ______________________________ - + def test_something(): > checkconfig(42) E Failed: not configured: 42 - + test_checkconfig.py:8: Failed 1 failed in 0.02 seconds @@ -260,10 +260,10 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: project deps: mylib-1.1 collected 0 items - + ============================= in 0.00 seconds ============================= .. regendoc:wipe @@ -284,11 +284,11 @@ $ py.test -v =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: info1: did you know that ... did you? collecting ... collected 0 items - + ============================= in 0.00 seconds ============================= and nothing when run plainly:: @@ -296,9 +296,9 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 0 items - + ============================= in 0.00 seconds ============================= profiling test duration @@ -329,11 +329,11 @@ $ py.test --durations=3 =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 3 items - + test_some_are_slow.py ... 
- + ========================= slowest 3 test durations ========================= 0.20s call test_some_are_slow.py::test_funcslow2 0.10s call test_some_are_slow.py::test_funcslow1 @@ -391,20 +391,20 @@ $ py.test -rx =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 4 items - + test_step.py .Fx. - + ================================= FAILURES ================================= ____________________ TestUserHandling.test_modification ____________________ - + self = - + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError ========================= short test summary info ========================== XFAIL test_step.py::TestUserHandling::()::test_deletion @@ -462,14 +462,14 @@ $ py.test =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 7 items - + test_step.py .Fx. a/test_db.py F a/test_db2.py F b/test_error.py E - + ================================== ERRORS ================================== _______________________ ERROR at setup of test_root ________________________ file /tmp/doc-exec-162/b/test_error.py, line 1 @@ -477,37 +477,37 @@ fixture 'db' not found available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd use 'py.test --fixtures [testpath]' for help on them. 
- + /tmp/doc-exec-162/b/test_error.py:1 ================================= FAILURES ================================= ____________________ TestUserHandling.test_modification ____________________ - + self = - + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError _________________________________ test_a1 __________________________________ - + db = - + def test_a1(db): > assert 0, db # to show value E AssertionError: E assert 0 - + a/test_db.py:2: AssertionError _________________________________ test_a2 __________________________________ - + db = - + def test_a2(db): > assert 0, db # to show value E AssertionError: E assert 0 - + a/test_db2.py:2: AssertionError ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ========== @@ -565,27 +565,27 @@ $ py.test test_module.py =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 2 items - + test_module.py FF - + ================================= FAILURES ================================= ________________________________ test_fail1 ________________________________ - + tmpdir = local('/tmp/pytest-22/test_fail10') - + def test_fail1(tmpdir): > assert 0 E assert 0 - + test_module.py:2: AssertionError ________________________________ test_fail2 ________________________________ - + def test_fail2(): > assert 0 E assert 0 - + test_module.py:4: AssertionError ========================= 2 failed in 0.02 seconds ========================= @@ -656,38 +656,38 @@ $ py.test -s test_module.py =========================== test session starts ============================ platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + rootdir: /tmp/doc-exec-162, inifile: collected 3 items - + test_module.py Esetting up a test failed! 
test_module.py::test_setup_fails Fexecuting test failed test_module.py::test_call_fails F - + ================================== ERRORS ================================== ____________________ ERROR at setup of test_setup_fails ____________________ - + @pytest.fixture def other(): > assert 0 E assert 0 - + test_module.py:6: AssertionError ================================= FAILURES ================================= _____________________________ test_call_fails ______________________________ - + something = None - + def test_call_fails(something): > assert 0 E assert 0 - + test_module.py:12: AssertionError ________________________________ test_fail2 ________________________________ - + def test_fail2(): > assert 0 E assert 0 - + test_module.py:15: AssertionError ==================== 2 failed, 1 error in 0.02 seconds ===================== Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From builds at drone.io Wed Jun 3 23:55:12 2015 From: builds at drone.io (Drone.io Build) Date: Wed, 03 Jun 2015 21:55:12 +0000 Subject: [Pytest-commit] [FAIL] pytest - # 127 Message-ID: <20150603215512.47453.27714@drone.io> Build Failed Build : https://drone.io/bitbucket.org/pytest-dev/pytest/127 Project : https://drone.io/bitbucket.org/pytest-dev/pytest Repository : https://bitbucket.org/pytest-dev/pytest Version : 3987:7227c7098c9d Author : holger krekel Branch : pytest-2.7 Message: fix typo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From commits-noreply at bitbucket.org Thu Jun 4 01:09:09 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 03 Jun 2015 23:09:09 -0000 Subject: [Pytest-commit] commit/pytest: gutworth: use NameConstant node when it exists (fixes #735) Message-ID: <20150603230909.8902.24629@app08.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/ce3f55c10452/ Changeset: ce3f55c10452 Branch: issue735 User: gutworth Date: 2015-06-03 23:07:10+00:00 Summary: use NameConstant node when it exists (fixes #735) Affected #: 2 files diff -r 44eb61fafecb5990112ac91b58d2496ed81a83b0 -r ce3f55c1045264493df6bc4d5a00dc1904fcdf7e CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,6 +1,8 @@ 2.8.0.dev (compared to 2.7.X) ----------------------------- +- fix issue735: assertion failures on debug versions of Python 3.4+ + - change test module importing behaviour to append to sys.path instead of prepending. This better allows to run test modules against installated versions of a package even if the package diff -r 44eb61fafecb5990112ac91b58d2496ed81a83b0 -r ce3f55c1045264493df6bc4d5a00dc1904fcdf7e _pytest/assertion/rewrite.py --- a/_pytest/assertion/rewrite.py +++ b/_pytest/assertion/rewrite.py @@ -442,6 +442,13 @@ ast.NotIn: "not in" } +# Python 3.4+ compatibility +if hasattr(ast, "NameConstant"): + _NameConstant = ast.NameConstant +else: + def _NameConstant(c): + return ast.Name(str(c), ast.Load()) + def set_location(node, lineno, col_offset): """Set node location information recursively.""" @@ -680,7 +687,7 @@ if self.variables: variables = [ast.Name(name, ast.Store()) for name in self.variables] - clear = ast.Assign(variables, ast.Name("None", ast.Load())) + clear = ast.Assign(variables, _NameConstant(None)) self.statements.append(clear) # Fix line numbers. for stmt in self.statements: Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. 
You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Thu Jun 4 07:48:58 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 04 Jun 2015 05:48:58 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: Close branch issue735 Message-ID: <20150604054858.20715.33347@app02.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/5ac0ada92e1a/ Changeset: 5ac0ada92e1a Branch: issue735 User: hpk42 Date: 2015-06-04 05:48:53+00:00 Summary: Close branch issue735 Affected #: 0 files Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Thu Jun 4 07:48:58 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 04 Jun 2015 05:48:58 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: Merged in issue735 (pull request #302) Message-ID: <20150604054858.25528.64918@app12.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/61d638f89acb/ Changeset: 61d638f89acb User: hpk42 Date: 2015-06-04 05:48:53+00:00 Summary: Merged in issue735 (pull request #302) use NameConstant node when it exists (fixes #735) Affected #: 2 files diff -r 44eb61fafecb5990112ac91b58d2496ed81a83b0 -r 61d638f89acbda40a49ccadcb4aebe06783200df CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,6 +1,8 @@ 2.8.0.dev (compared to 2.7.X) ----------------------------- +- fix issue735: assertion failures on debug versions of Python 3.4+ + - change test module importing behaviour to append to sys.path instead of prepending. 
This better allows to run test modules against installated versions of a package even if the package diff -r 44eb61fafecb5990112ac91b58d2496ed81a83b0 -r 61d638f89acbda40a49ccadcb4aebe06783200df _pytest/assertion/rewrite.py --- a/_pytest/assertion/rewrite.py +++ b/_pytest/assertion/rewrite.py @@ -442,6 +442,13 @@ ast.NotIn: "not in" } +# Python 3.4+ compatibility +if hasattr(ast, "NameConstant"): + _NameConstant = ast.NameConstant +else: + def _NameConstant(c): + return ast.Name(str(c), ast.Load()) + def set_location(node, lineno, col_offset): """Set node location information recursively.""" @@ -680,7 +687,7 @@ if self.variables: variables = [ast.Name(name, ast.Store()) for name in self.variables] - clear = ast.Assign(variables, ast.Name("None", ast.Load())) + clear = ast.Assign(variables, _NameConstant(None)) self.statements.append(clear) # Fix line numbers. for stmt in self.statements: Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Thu Jun 4 07:52:47 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 04 Jun 2015 05:52:47 -0000 Subject: [Pytest-commit] commit/pytest: hpk42: backport fixed issue735 Message-ID: <20150604055247.15979.62688@app13.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/5fdb1399c25f/ Changeset: 5fdb1399c25f Branch: pytest-2.7 User: hpk42 Date: 2015-06-04 05:52:25+00:00 Summary: backport fixed issue735 Affected #: 2 files diff -r 7227c7098c9d4dee502cf1f53818c02c153ef587 -r 5fdb1399c25fcd6c4ed7d663b026f5fc8b9c8c04 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,3 +1,9 @@ +2.7.2 (compared to 2.7.1) +----------------------------- + +- fix issue735: assertion failures on debug versions of Python 3.4+ + Thanks Benjamin Peterson. 
+ 2.7.1 (compared to 2.7.0) ----------------------------- diff -r 7227c7098c9d4dee502cf1f53818c02c153ef587 -r 5fdb1399c25fcd6c4ed7d663b026f5fc8b9c8c04 _pytest/assertion/rewrite.py --- a/_pytest/assertion/rewrite.py +++ b/_pytest/assertion/rewrite.py @@ -442,6 +442,13 @@ ast.NotIn: "not in" } +# Python 3.4+ compatibility +if hasattr(ast, "NameConstant"): + _NameConstant = ast.NameConstant +else: + def _NameConstant(c): + return ast.Name(str(c), ast.Load()) + def set_location(node, lineno, col_offset): """Set node location information recursively.""" @@ -680,7 +687,7 @@ if self.variables: variables = [ast.Name(name, ast.Store()) for name in self.variables] - clear = ast.Assign(variables, ast.Name("None", ast.Load())) + clear = ast.Assign(variables, _NameConstant(None)) self.statements.append(clear) # Fix line numbers. for stmt in self.statements: Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From builds at drone.io Thu Jun 4 08:11:08 2015 From: builds at drone.io (Drone.io Build) Date: Thu, 04 Jun 2015 06:11:08 +0000 Subject: [Pytest-commit] [FAIL] pytest - # 132 Message-ID: <20150604061107.11557.34366@drone.io> Build Failed Build : https://drone.io/bitbucket.org/pytest-dev/pytest/132 Project : https://drone.io/bitbucket.org/pytest-dev/pytest Repository : https://bitbucket.org/pytest-dev/pytest Version : 3988:5fdb1399c25f Author : holger krekel Branch : pytest-2.7 Message: backport fixed issue735 -------------- next part -------------- An HTML attachment was scrubbed... 
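The issue735 fix quoted in the diffs above exists because Python 3.4 introduced a dedicated `ast.NameConstant` node for `True`/`False`/`None`, and debug builds of 3.4+ assert when the old `ast.Name("None", ast.Load())` spelling is compiled. The sketch below reproduces the shim's dispatch pattern and exercises it the same way the assertion rewriter does (building a `variable = None` clearing assignment); the extra `ast.Constant` branch is an assumption for current Pythons, where `NameConstant` survives only as a deprecated alias.

```python
import ast

# Mirrors the compatibility shim from the commit above. Branch order is an
# assumption for modern interpreters: prefer ast.Constant (3.8+), then
# ast.NameConstant (3.4-3.7), else fall back to a plain Name node.
if hasattr(ast, "Constant"):
    def _NameConstant(c):
        return ast.Constant(c)
elif hasattr(ast, "NameConstant"):
    _NameConstant = ast.NameConstant
else:
    def _NameConstant(c):
        return ast.Name(str(c), ast.Load())

# Same pattern the rewriter uses to clear a temporary: build `cleared = None`
# by swapping the value node, then compile and run the module.
module = ast.parse("cleared = 0")
module.body[0].value = _NameConstant(None)
ast.fix_missing_locations(module)
ns = {}
exec(compile(module, "<shim-demo>", "exec"), ns)
assert ns["cleared"] is None
```

The hasattr-dispatch keeps one call site (`_NameConstant(None)`) working across interpreter versions, which is why the diff touches only one line of the rewriter proper.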
URL: From commits-noreply at bitbucket.org Sat Jun 6 11:50:43 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Sat, 06 Jun 2015 09:50:43 -0000 Subject: [Pytest-commit] commit/pytest: 2 new changesets Message-ID: <20150606095043.31947.47704@app06.ash-private.bitbucket.org> 2 new commits in pytest: https://bitbucket.org/pytest-dev/pytest/commits/c662aaf3be4d/ Changeset: c662aaf3be4d Branch: include-setup-teardown-duration-in-junitxml User: jpvanhal Date: 2015-05-01 11:55:52+00:00 Summary: Include setup and teardown in junitxml test durations Affected #: 2 files diff -r ce106625d6b53f0ff6e9dce8a1cf0c761341b2f4 -r c662aaf3be4d5d0ff6701603076641e96b7b70dc _pytest/junitxml.py --- a/_pytest/junitxml.py +++ b/_pytest/junitxml.py @@ -96,7 +96,7 @@ self.tests.append(Junit.testcase( classname=".".join(classnames), name=bin_xml_escape(names[-1]), - time=getattr(report, 'duration', 0) + time=0 )) def _write_captured_output(self, report): @@ -168,18 +168,18 @@ self._write_captured_output(report) def pytest_runtest_logreport(self, report): + if report.when == "setup": + self._opentestcase(report) + self.tests[-1].attr.time += getattr(report, 'duration', 0) if report.passed: if report.when == "call": # ignore setup/teardown - self._opentestcase(report) self.append_pass(report) elif report.failed: - self._opentestcase(report) if report.when != "call": self.append_error(report) else: self.append_failure(report) elif report.skipped: - self._opentestcase(report) self.append_skipped(report) def pytest_collectreport(self, report): diff -r ce106625d6b53f0ff6e9dce8a1cf0c761341b2f4 -r c662aaf3be4d5d0ff6701603076641e96b7b70dc testing/test_junitxml.py --- a/testing/test_junitxml.py +++ b/testing/test_junitxml.py @@ -44,6 +44,10 @@ def test_timing_function(self, testdir): testdir.makepyfile(""" import time, pytest + def setup_module(): + time.sleep(0.01) + def teardown_module(): + time.sleep(0.01) def test_sleep(): time.sleep(0.01) """) @@ -51,7 +55,7 @@ 
node = dom.getElementsByTagName("testsuite")[0] tnode = node.getElementsByTagName("testcase")[0] val = tnode.getAttributeNode("time").value - assert float(val) >= 0.001 + assert float(val) >= 0.03 def test_setup_error(self, testdir): testdir.makepyfile(""" https://bitbucket.org/pytest-dev/pytest/commits/7bad072348c4/ Changeset: 7bad072348c4 User: RonnyPfannschmidt Date: 2015-06-06 09:50:37+00:00 Summary: Merged in jpvanhal/pytest/include-setup-teardown-duration-in-junitxml (pull request #287) Include setup and teardown in junitxml test durations Affected #: 2 files diff -r 61d638f89acbda40a49ccadcb4aebe06783200df -r 7bad072348c45897c52d31eb2765a97e11721eba _pytest/junitxml.py --- a/_pytest/junitxml.py +++ b/_pytest/junitxml.py @@ -96,7 +96,7 @@ self.tests.append(Junit.testcase( classname=".".join(classnames), name=bin_xml_escape(names[-1]), - time=getattr(report, 'duration', 0) + time=0 )) def _write_captured_output(self, report): @@ -168,18 +168,18 @@ self._write_captured_output(report) def pytest_runtest_logreport(self, report): + if report.when == "setup": + self._opentestcase(report) + self.tests[-1].attr.time += getattr(report, 'duration', 0) if report.passed: if report.when == "call": # ignore setup/teardown - self._opentestcase(report) self.append_pass(report) elif report.failed: - self._opentestcase(report) if report.when != "call": self.append_error(report) else: self.append_failure(report) elif report.skipped: - self._opentestcase(report) self.append_skipped(report) def pytest_collectreport(self, report): diff -r 61d638f89acbda40a49ccadcb4aebe06783200df -r 7bad072348c45897c52d31eb2765a97e11721eba testing/test_junitxml.py --- a/testing/test_junitxml.py +++ b/testing/test_junitxml.py @@ -44,6 +44,10 @@ def test_timing_function(self, testdir): testdir.makepyfile(""" import time, pytest + def setup_module(): + time.sleep(0.01) + def teardown_module(): + time.sleep(0.01) def test_sleep(): time.sleep(0.01) """) @@ -51,7 +55,7 @@ node = 
dom.getElementsByTagName("testsuite")[0] tnode = node.getElementsByTagName("testcase")[0] val = tnode.getAttributeNode("time").value - assert float(val) >= 0.001 + assert float(val) >= 0.03 def test_setup_error(self, testdir): testdir.makepyfile(""" Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Sat Jun 6 11:50:44 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Sat, 06 Jun 2015 09:50:44 -0000 Subject: [Pytest-commit] commit/pytest: RonnyPfannschmidt: Merged in jpvanhal/pytest/include-setup-teardown-duration-in-junitxml (pull request #287) Message-ID: <20150606095044.6601.73675@app08.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/7bad072348c4/ Changeset: 7bad072348c4 User: RonnyPfannschmidt Date: 2015-06-06 09:50:37+00:00 Summary: Merged in jpvanhal/pytest/include-setup-teardown-duration-in-junitxml (pull request #287) Include setup and teardown in junitxml test durations Affected #: 2 files diff -r 61d638f89acbda40a49ccadcb4aebe06783200df -r 7bad072348c45897c52d31eb2765a97e11721eba _pytest/junitxml.py --- a/_pytest/junitxml.py +++ b/_pytest/junitxml.py @@ -96,7 +96,7 @@ self.tests.append(Junit.testcase( classname=".".join(classnames), name=bin_xml_escape(names[-1]), - time=getattr(report, 'duration', 0) + time=0 )) def _write_captured_output(self, report): @@ -168,18 +168,18 @@ self._write_captured_output(report) def pytest_runtest_logreport(self, report): + if report.when == "setup": + self._opentestcase(report) + self.tests[-1].attr.time += getattr(report, 'duration', 0) if report.passed: if report.when == "call": # ignore setup/teardown - self._opentestcase(report) self.append_pass(report) elif report.failed: - self._opentestcase(report) if report.when != "call": 
self.append_error(report) else: self.append_failure(report) elif report.skipped: - self._opentestcase(report) self.append_skipped(report) def pytest_collectreport(self, report): diff -r 61d638f89acbda40a49ccadcb4aebe06783200df -r 7bad072348c45897c52d31eb2765a97e11721eba testing/test_junitxml.py --- a/testing/test_junitxml.py +++ b/testing/test_junitxml.py @@ -44,6 +44,10 @@ def test_timing_function(self, testdir): testdir.makepyfile(""" import time, pytest + def setup_module(): + time.sleep(0.01) + def teardown_module(): + time.sleep(0.01) def test_sleep(): time.sleep(0.01) """) @@ -51,7 +55,7 @@ node = dom.getElementsByTagName("testsuite")[0] tnode = node.getElementsByTagName("testcase")[0] val = tnode.getAttributeNode("time").value - assert float(val) >= 0.001 + assert float(val) >= 0.03 def test_setup_error(self, testdir): testdir.makepyfile(""" Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Sat Jun 6 11:53:51 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Sat, 06 Jun 2015 09:53:51 -0000 Subject: [Pytest-commit] commit/pytest: RonnyPfannschmidt: add Janne`s changes to CHANGELOG Message-ID: <20150606095351.9642.49014@app14.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/cfef0237e937/ Changeset: cfef0237e937 User: RonnyPfannschmidt Date: 2015-06-06 09:53:34+00:00 Summary: add Janne`s changes to CHANGELOG Affected #: 1 file diff -r 7bad072348c45897c52d31eb2765a97e11721eba -r cfef0237e937713a80494fad9f570ffaf441b30b CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,6 +1,9 @@ 2.8.0.dev (compared to 2.7.X) ----------------------------- +- Include setup and teardown in junitxml test durations. + Thanks Janne Vanhala. 
+ - fix issue735: assertion failures on debug versions of Python 3.4+ - change test module importing behaviour to append to sys.path @@ -10,10 +13,10 @@ testing/__init__.py testing/test_pkg_under_test.py - pkg_under_test/ - - the tests will preferrably run against the installed version - of pkg_under_test whereas before they would always pick + pkg_under_test/ + + the tests will preferrably run against the installed version + of pkg_under_test whereas before they would always pick up the local version. Thanks Holger Krekel. - pytester: add method ``TmpTestdir.delete_loaded_modules()``, and call it @@ -21,19 +24,19 @@ Thanks Eduardo Schettino. - internally refactor pluginmanager API and code so that there - is a clear distinction between a pytest-agnostic rather simple + is a clear distinction between a pytest-agnostic rather simple pluginmanager and the PytestPluginManager which adds a lot of behaviour, among it handling of the local conftest files. In terms of documented methods this is a backward compatible - change but it might still break 3rd party plugins which relied on + change but it might still break 3rd party plugins which relied on details like especially the pluginmanager.add_shutdown() API. Thanks Holger Krekel. -- pluginmanagement: introduce ``pytest.hookimpl`` and - ``pytest.hookspec`` decorators for setting impl/spec - specific parameters. This substitutes the previous - now deprecated use of ``pytest.mark`` which is meant to - contain markers for test functions only. +- pluginmanagement: introduce ``pytest.hookimpl`` and + ``pytest.hookspec`` decorators for setting impl/spec + specific parameters. This substitutes the previous + now deprecated use of ``pytest.mark`` which is meant to + contain markers for test functions only. - write/refine docs for "writing plugins" which now have their own page and are separate from the "using/installing plugins`` page. 
@@ -41,8 +44,8 @@ - fix issue732: properly unregister plugins from any hook calling sites allowing to have temporary plugins during test execution. -- deprecate and warn about ``__multicall__`` argument in hook - implementations. Use the ``hookwrapper`` mechanism instead already +- deprecate and warn about ``__multicall__`` argument in hook + implementations. Use the ``hookwrapper`` mechanism instead already introduced with pytest-2.7. - speed up pytest's own test suite considerably by using inprocess @@ -59,7 +62,7 @@ - fix issue741: make running output from testdir.run copy/pasteable Thanks Bruno Oliveira. - + 2.7.1 (compared to 2.7.0) ----------------------------- Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Sat Jun 6 13:38:44 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Sat, 06 Jun 2015 11:38:44 -0000 Subject: [Pytest-commit] commit/pytest: RonnyPfannschmidt: sort AUTHORS and add Janne Message-ID: <20150606113844.24393.7286@app06.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/614bb548b536/ Changeset: 614bb548b536 User: RonnyPfannschmidt Date: 2015-06-06 11:38:25+00:00 Summary: sort AUTHORS and add Janne Affected #: 1 file diff -r cfef0237e937713a80494fad9f570ffaf441b30b -r 614bb548b53695323a83069fe6b4584f4a02f65d AUTHORS --- a/AUTHORS +++ b/AUTHORS @@ -3,50 +3,51 @@ Contributors include:: -Ronny Pfannschmidt +Anatoly Bubenkoff +Andreas Zeidler +Andy Freeland +Anthon van der Neut +Armin Rigo Benjamin Peterson -Floris Bruynooghe -Jason R. 
Coombs -Wouter van Ackooy -Samuele Pedroni -Anatoly Bubenkoff +Bob Ippolito +Brian Dorsey +Brian Okken Brianna Laugher Carl Friedrich Bolz -Armin Rigo -Maho -Jaap Broekhuizen -Maciek Fijalkowski -Guido Wesdorp -Brian Dorsey -Ross Lawley -Ralf Schmitt +Charles Cloud Chris Lamb -Harald Armin Massa -Martijn Faassen -Ian Bicking -Jan Balster -Grig Gheorghiu -Bob Ippolito +Christian Theunert Christian Tismer -Daniel Nuri -Graham Horler -Andreas Zeidler -Brian Okken -Katarzyna Jachim -Christian Theunert -Anthon van der Neut -Mark Abramowitz -Piotr Banaszkiewicz -Jurko Gospodnetić -Marc Schlaich Christopher Gilling Daniel Grana -Andy Freeland -Trevor Bekolay +Daniel Nuri +Dave Hunt David Mohr -Nicolas Delaby -Tom Viner -Dave Hunt -Charles Cloud Eduardo Schettino Florian Bruhin +Floris Bruynooghe +Graham Horler +Grig Gheorghiu +Guido Wesdorp +Harald Armin Massa +Ian Bicking +Jaap Broekhuizen +Jan Balster +Jason R. Coombs +Jurko Gospodnetić +Katarzyna Jachim +Maciek Fijalkowski +Maho +Marc Schlaich +Mark Abramowitz +Martijn Faassen +Nicolas Delaby +Piotr Banaszkiewicz +Ralf Schmitt +Ronny Pfannschmidt +Ross Lawley +Samuele Pedroni +Tom Viner +Trevor Bekolay +Wouter van Ackooy +Janne Vanhala Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From issues-reply at bitbucket.org Mon Jun 8 04:03:50 2015 From: issues-reply at bitbucket.org (Ned Batchelder) Date: Mon, 08 Jun 2015 02:03:50 -0000 Subject: [Pytest-commit] Issue #256: Provide a way to reduce the diagnostic output (hpk42/tox) Message-ID: <20150608020350.29517.64071@app07.ash-private.bitbucket.org> New issue 256: Provide a way to reduce the diagnostic output https://bitbucket.org/hpk42/tox/issue/256/provide-a-way-to-reduce-the-diagnostic Ned Batchelder: When I run my tests, tox prints a number of lines of information.
This is helpful when things are going wrong, but I don't need to see them on every test run. ``` $ tox -epy27-django14 py27-django14 develop-inst-noop: /Users/ned/coverage/django_coverage_plugin py27-django14 installed: -f /Users/ned/Downloads/local_pypi,coverage==4.0a6,Django==1.4.20,-e git+git at github.com:nedbat/django_coverage_plugin.git at b337234512edc50f3962871bd03998da20d26446#egg=django_coverage_plugin-master,six==1.9.0,wheel==0.24.0 py27-django14 runtests: PYTHONHASHSEED='471377262' py27-django14 runtests: commands[0] | /Users/ned/coverage/django_coverage_plugin/.tox/py27-django14/bin/python -c import tests.banner CPython 2.7.6; Django 1.4.20 py27-django14 runtests: commands[1] | /Users/ned/coverage/django_coverage_plugin/.tox/py27-django14/bin/python -m unittest discover -b ...........................s......... ---------------------------------------------------------------------- Ran 37 tests in 0.301s OK (skipped=1) ``` Here there are 11 lines of output. Five are from tox, and I would like to suppress them. From issues-reply at bitbucket.org Mon Jun 8 16:19:33 2015 From: issues-reply at bitbucket.org (Tom V) Date: Mon, 08 Jun 2015 14:19:33 -0000 Subject: [Pytest-commit] Issue #764: --fixtures doesn't display all fixtures from conftest.py in sub directories (pytest-dev/pytest) Message-ID: <20150608141933.4566.83269@app13.ash-private.bitbucket.org> New issue 764: --fixtures doesn't display all fixtures from conftest.py in sub directories https://bitbucket.org/pytest-dev/pytest/issue/764/fixtures-doesnt-display-all-fixtures-from Tom V: Sample project to reproduce: https://github.com/tomviner/bug-report-pytest-fixtures-list Given I have a project with two app folders. Both apps have test folders with a conftest.py that both define a fixture. Also both apps have test files that use the respective app's fixture. 
i.e.: ``` (pytest-fixture-find)~/dev/pytest-fixture-find$ find -name conftest.py ./project/app1/tests/conftest.py ./project/app2/tests/conftest.py (pytest-fixture-find)~/dev/pytest-fixture-find$ ack-grep --py --context 1 "def fix" project/app1/tests/conftest.py 3-@pytest.fixture 4:def fixy1(): 5- pass project/app2/tests/conftest.py 3-@pytest.fixture 4:def fixy2(): 5- pass ``` Yet when I run `py.test --fixtures` or `py.test --fixtures --traceconfig` I only see one of those fixtures: ``` $ py.test --fixtures --traceconfig PLUGIN registered: <_pytest.python.FixtureManager instance at 0x7f2af6b43440> =============== test session starts =============== platform linux2 -- Python 2.7.8 -- py-1.4.28 -- pytest-2.7.1 using: pytest-2.7.1 pylib-1.4.28 ... rootdir: /home/me/dev/pytest-fixture-find, inifile: PLUGIN registered: PLUGIN registered: collected 2 items ... ------------- fixtures defined from conftest -------------- fixy1 project/app1/tests/conftest.py:4: no docstring available ================ in 0.01 seconds ============= ``` `fixy2` is not shown. See https://github.com/tomviner/bug-report-pytest-fixtures-list for full output. From issues-reply at bitbucket.org Tue Jun 9 16:58:01 2015 From: issues-reply at bitbucket.org (pchambon) Date: Tue, 09 Jun 2015 14:58:01 -0000 Subject: [Pytest-commit] Issue #765: Re-allow simple class::test selection via "-k" (pytest-dev/pytest) Message-ID: <20150609145801.11074.59005@app05.ash-private.bitbucket.org> New issue 765: Re-allow simple class::test selection via "-k" https://bitbucket.org/pytest-dev/pytest/issue/765/re-allow-simple-class-test-selection-via-k pchambon: Long ago, it was possible to simply copy-paste the name of a failed test (in the form "MyClass.my_test") from the output of a run, to rerun only that test thanks to the "-k" option.
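The translation the reporter is asking for is mechanical; a hypothetical sketch (`nodeid_to_k` is not a real pytest helper) of turning a copied node id into an equivalent "-k" expression:

```python
def nodeid_to_k(nodeid):
    """Turn a node id such as 'test_mod.py::TestClass::test_method'
    into a -k expression like 'TestClass and test_method'."""
    parts = nodeid.split("::")
    # Drop the file part and join the class/function names with 'and';
    # a bare file name has no keyword part, so return it unchanged.
    return " and ".join(parts[1:]) if len(parts) > 1 else parts[0]

print(nodeid_to_k("test_abstract_processing.py::TestUserRetriever::test_user_retrieval"))
# → TestUserRetriever and test_user_retrieval
```

Such an expression can then be passed as `py.test -k "TestUserRetriever and test_user_retrieval"`.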
Now that the "-k" option has evolved considerably, one is forced to rewrite the name with spaces and logical operators, which is quite unwieldy when working through a large test suite with miscellaneous failing tests. I see that the new output uses double colons instead: "test_abstract_processing.py::TestUserRetriever::test_user_retrieval". Wouldn't it then be a good idea to automatically recognize that alternative syntax in the "-k" option, or even in another option dedicated to it? From commits-noreply at bitbucket.org Wed Jun 10 11:44:46 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 10 Jun 2015 09:44:46 -0000 Subject: [Pytest-commit] commit/pytest: flub: Merged in RonnyPfannschmidt/pytest/regendoc-upgrade (pull request #303) Message-ID: <20150610094446.25603.11157@app07.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/d6e31c7a2be2/ Changeset: d6e31c7a2be2 User: flub Date: 2015-06-10 09:44:38+00:00 Summary: Merged in RonnyPfannschmidt/pytest/regendoc-upgrade (pull request #303) use regendoc normalization and regenerate docs Affected #: 20 files diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea Makefile --- a/Makefile +++ b/Makefile @@ -1,6 +1,11 @@ # Set of targets useful for development/release process PYTHON = python2.7 PATH := $(PWD)/.env/bin:$(PATH) +REGENDOC_ARGS := \ + --normalize "/={8,} (.*) ={8,}/======= \1 ========/" \ + --normalize "/_{8,} (.*) _{8,}/_______ \1 ________/" \ + --normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \ + --normalize "@/tmp/pytest-\d+/@/tmp/pytest-NaN/@" # prepare virtual python environment .env: @@ -16,10 +21,11 @@ # generate documentation docs: develop - find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc + find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS} cd doc/en; make html # upload documentation upload-docs: develop - find doc/en -name
'*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc --update - cd doc/en; make install + find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS} --update + #cd doc/en; make install + diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/assert.txt --- a/doc/en/assert.txt +++ b/doc/en/assert.txt @@ -25,15 +25,15 @@ you will see the return value of the function call:: $ py.test test_assert1.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-87, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_assert1.py F - ================================= FAILURES ================================= - ______________________________ test_function _______________________________ + ======= FAILURES ======== + _______ test_function ________ def test_function(): > assert f() == 4 @@ -41,7 +41,7 @@ E + where 3 = f() test_assert1.py:5: AssertionError - ========================= 1 failed in 0.01 seconds ========================= + ======= 1 failed in 0.12 seconds ======== ``pytest`` has support for showing the values of the most common subexpressions including calls, attributes, comparisons, and binary and unary @@ -135,15 +135,15 @@ if you run this module:: $ py.test test_assert2.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-87, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_assert2.py F - ================================= FAILURES ================================= - 
___________________________ test_set_comparison ____________________________ + ======= FAILURES ======== + _______ test_set_comparison ________ def test_set_comparison(): set1 = set("1308") @@ -157,7 +157,7 @@ E Use -v to get the full diff test_assert2.py:5: AssertionError - ========================= 1 failed in 0.01 seconds ========================= + ======= 1 failed in 0.12 seconds ======== Special comparisons are done for a number of cases: @@ -202,8 +202,8 @@ $ py.test -q test_foocompare.py F - ================================= FAILURES ================================= - _______________________________ test_compare _______________________________ + ======= FAILURES ======== + _______ test_compare ________ def test_compare(): f1 = Foo(1) @@ -213,7 +213,7 @@ E vals: 1 != 2 test_foocompare.py:8: AssertionError - 1 failed in 0.01 seconds + 1 failed in 0.12 seconds .. _assert-details: .. _`assert introspection`: diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/builtin.txt --- a/doc/en/builtin.txt +++ b/doc/en/builtin.txt @@ -115,4 +115,4 @@ directory. The returned object is a `py.path.local`_ path object. 
- in 0.00 seconds + in 0.12 seconds diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/capture.txt --- a/doc/en/capture.txt +++ b/doc/en/capture.txt @@ -63,24 +63,24 @@ of the failing function and hide the other one:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-90, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .F - ================================= FAILURES ================================= - ________________________________ test_func2 ________________________________ + ======= FAILURES ======== + _______ test_func2 ________ def test_func2(): > assert False E assert False test_module.py:9: AssertionError - -------------------------- Captured stdout setup --------------------------- - setting up - ==================== 1 failed, 1 passed in 0.01 seconds ==================== + ---------------------------- Captured stdout setup ----------------------------- + setting up + ======= 1 failed, 1 passed in 0.12 seconds ======== Accessing captured output from a test function --------------------------------------------------- diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/doctest.txt --- a/doc/en/doctest.txt +++ b/doc/en/doctest.txt @@ -43,14 +43,14 @@ then you can just invoke ``py.test`` without command line options:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-96, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 1 items 
mymodule.py . - ========================= 1 passed in 0.06 seconds ========================= + ======= 1 passed in 0.12 seconds ======== It is possible to use fixtures using the ``getfixture`` helper:: diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/markers.txt --- a/doc/en/example/markers.txt +++ b/doc/en/example/markers.txt @@ -30,30 +30,30 @@ You can then restrict a test run to only run tests marked with ``webtest``:: $ py.test -v -m webtest - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::test_send_http PASSED - =================== 3 tests deselected by "-m 'webtest'" =================== - ================== 1 passed, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by "-m 'webtest'" ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== Or the inverse, running all tests except the webtest ones:: $ py.test -v -m "not webtest" - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_something_quick PASSED test_server.py::test_another PASSED test_server.py::TestClass::test_method PASSED - ================= 1 tests deselected by "-m 'not webtest'" ================= - ================== 3 passed, 1 deselected in 0.01 seconds ================== + ======= 1 tests deselected by "-m 'not webtest'" ======== + ======= 3 passed, 1 deselected in 0.12 seconds ======== Selecting tests based on their node ID -------------------------------------- @@ -63,39 +63,39 @@ tests based on their module, class, method, or function name:: $ py.test -v test_server.py::TestClass::test_method - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 5 items test_server.py::TestClass::test_method PASSED - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== You can also select on the class:: $ py.test -v test_server.py::TestClass - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::TestClass::test_method PASSED - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== Or select multiple nodes:: $ py.test -v test_server.py::TestClass test_server.py::test_send_http - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 8 items test_server.py::TestClass::test_method PASSED test_server.py::test_send_http PASSED - ========================= 2 passed in 0.01 seconds ========================= + ======= 2 passed in 0.12 seconds ======== .. _node-id: @@ -124,44 +124,44 @@ select tests based on their names:: $ py.test -v -k http # running with the above defined example module - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_send_http PASSED - ====================== 3 tests deselected by '-khttp' ====================== - ================== 1 passed, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by '-khttp' ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== And you can also run all tests except the ones that match the keyword:: $ py.test -k "not send_http" -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::test_something_quick PASSED test_server.py::test_another PASSED test_server.py::TestClass::test_method PASSED - ================= 1 tests deselected by '-knot send_http' ================== - ================== 3 passed, 1 deselected in 0.01 seconds ================== + ======= 1 tests deselected by '-knot send_http' ======== + ======= 3 passed, 1 deselected in 0.12 seconds ======== Or to select "http" and "quick" tests:: $ py.test -k "http or quick" -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_send_http PASSED test_server.py::test_something_quick PASSED - ================= 2 tests deselected by '-khttp or quick' ================== - ================== 2 passed, 2 deselected in 0.01 seconds ================== + ======= 2 tests deselected by '-khttp or quick' ======== + ======= 2 passed, 2 deselected in 0.12 seconds ======== .. note:: @@ -201,9 +201,9 @@ @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures - @pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. + @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. - @pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. + @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. 
For an example on how to add and work with markers from a plugin, see @@ -341,26 +341,26 @@ the test needs:: $ py.test -E stage2 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py s - ======================== 1 skipped in 0.01 seconds ========================= + ======= 1 skipped in 0.12 seconds ======== and here is one that specifies exactly the environment needed:: $ py.test -E stage1 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py . - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== The ``--markers`` option always gives you a list of available markers:: @@ -375,9 +375,9 @@ @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures - @pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. + @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. - @pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. 
+ @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. Reading markers which were set from multiple places @@ -420,7 +420,7 @@ glob args=('class',) kwargs={'x': 2} glob args=('module',) kwargs={'x': 1} . - 1 passed in 0.01 seconds + 1 passed in 0.12 seconds marking platform specific tests with pytest -------------------------------------------------------------- @@ -472,29 +472,29 @@ then you will see two test skipped and two executed tests as expected:: $ py.test -rs # this option reports skip reasons - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items - test_plat.py sss. - ========================= short test summary info ========================== - SKIP [3] /tmp/doc-exec-157/conftest.py:12: cannot run on platform linux + test_plat.py s.s. + ======= short test summary info ======== + SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux2 - =================== 1 passed, 3 skipped in 0.01 seconds ==================== + ======= 2 passed, 2 skipped in 0.12 seconds ======== Note that if you specify a platform via the marker-command line option like this:: $ py.test -m linux2 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items - test_plat.py s + test_plat.py . 
- =================== 3 tests deselected by "-m 'linux2'" ==================== - ================= 1 skipped, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by "-m 'linux2'" ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests. @@ -538,47 +538,47 @@ We can now use the ``-m option`` to select one set:: $ py.test -m interface --tb=short - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_module.py FF - ================================= FAILURES ================================= - __________________________ test_interface_simple ___________________________ + ======= FAILURES ======== + _______ test_interface_simple ________ test_module.py:3: in test_interface_simple assert 0 E assert 0 - __________________________ test_interface_complex __________________________ + _______ test_interface_complex ________ test_module.py:6: in test_interface_complex assert 0 E assert 0 - ================== 2 tests deselected by "-m 'interface'" ================== - ================== 2 failed, 2 deselected in 0.02 seconds ================== + ======= 2 tests deselected by "-m 'interface'" ======== + ======= 2 failed, 2 deselected in 0.12 seconds ======== or to select both "event" and "interface" tests:: $ py.test -m "interface or event" --tb=short - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + 
rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_module.py FFF - ================================= FAILURES ================================= - __________________________ test_interface_simple ___________________________ + ======= FAILURES ======== + _______ test_interface_simple ________ test_module.py:3: in test_interface_simple assert 0 E assert 0 - __________________________ test_interface_complex __________________________ + _______ test_interface_complex ________ test_module.py:6: in test_interface_complex assert 0 E assert 0 - ____________________________ test_event_simple _____________________________ + _______ test_event_simple ________ test_module.py:9: in test_event_simple assert 0 E assert 0 - ============= 1 tests deselected by "-m 'interface or event'" ============== - ================== 3 failed, 1 deselected in 0.02 seconds ================== + ======= 1 tests deselected by "-m 'interface or event'" ======== + ======= 3 failed, 1 deselected in 0.12 seconds ======== diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/nonpython.txt --- a/doc/en/example/nonpython.txt +++ b/doc/en/example/nonpython.txt @@ -26,19 +26,19 @@ now execute the test specification:: nonpython $ py.test test_simple.yml - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $PWD/doc/en, inifile: pytest.ini collected 2 items test_simple.yml .F - ================================= FAILURES ================================= - ______________________________ usecase: hello ______________________________ + ======= FAILURES ======== + _______ usecase: hello ________ usecase execution failed spec failed: 'some': 'other' no further details known at this 
point. - ==================== 1 failed, 1 passed in 0.19 seconds ==================== + ======= 1 failed, 1 passed in 0.12 seconds ======== You get one dot for the passing ``sub1: sub1`` check and one failure. Obviously in the above ``conftest.py`` you'll want to implement a more @@ -56,31 +56,31 @@ consulted when reporting in ``verbose`` mode:: nonpython $ py.test -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $PWD/doc/en, inifile: pytest.ini collecting ... collected 2 items test_simple.yml::ok PASSED test_simple.yml::hello FAILED - ================================= FAILURES ================================= - ______________________________ usecase: hello ______________________________ + ======= FAILURES ======== + _______ usecase: hello ________ usecase execution failed spec failed: 'some': 'other' no further details known at this point. 
-    ==================== 1 failed, 1 passed in 0.05 seconds ====================
+    ======= 1 failed, 1 passed in 0.12 seconds ========

 While developing your custom test collection and execution it's also
 interesting to just look at the collection tree::

     nonpython $ py.test --collect-only
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $PWD/doc/en, inifile: pytest.ini
     collected 2 items

-    ============================= in 0.04 seconds =============================
+    ======= in 0.12 seconds ========

diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/parametrize.txt
--- a/doc/en/example/parametrize.txt
+++ b/doc/en/example/parametrize.txt
@@ -46,15 +46,15 @@

     $ py.test -q test_compute.py
     ..
-    2 passed in 0.01 seconds
+    2 passed in 0.12 seconds

 We run only two computations, so we see two dots.
 let's run the full monty::

     $ py.test -q --all
     ....F
-    ================================= FAILURES =================================
-    _____________________________ test_compute[4] ______________________________
+    ======= FAILURES ========
+    _______ test_compute[4] ________

     param1 = 4

@@ -63,7 +63,7 @@

     E       assert 4 < 4

     test_compute.py:3: AssertionError
-    1 failed, 4 passed in 0.02 seconds
+    1 failed, 4 passed in 0.12 seconds

 As expected when running the full range of ``param1`` values we'll get an
 error on the last one.
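[Editorial note: the ``--all`` behaviour exercised in the hunk above comes from a small ``conftest.py`` hook that the diffed docs describe elsewhere. A minimal sketch of that pattern follows; the helper ``param_values`` is factored out here for illustration and is not part of the committed files.]

```python
# Sketch of command-line-driven parametrization: a conftest.py option
# ("--all") widens the generated "param1" range. The value-selection
# logic lives in a helper so it can be checked without a pytest run.

def param_values(run_all):
    """Two param1 values by default, five when --all is given."""
    return list(range(5 if run_all else 2))

def pytest_addoption(parser):
    parser.addoption("--all", action="store_true",
                     help="run all param1 combinations")

def pytest_generate_tests(metafunc):
    # called once per test function during collection
    if "param1" in metafunc.fixturenames:
        metafunc.parametrize(
            "param1", param_values(metafunc.config.getoption("all")))
```

With a ``test_compute(param1)`` asserting ``param1 < 4``, the default run gives two passing tests and ``--all`` gives five, the last one failing on ``assert 4 < 4`` — consistent with the ``2 passed`` and ``1 failed, 4 passed`` outputs quoted above.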
@@ -126,11 +126,11 @@ $ py.test test_time.py --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: - ============================= in 0.00 seconds ============================= + ======= in 0.12 seconds ======== ERROR: file not found: test_time.py A quick port of "testscenarios" @@ -170,22 +170,22 @@ this is a fully self-contained example which you can run with:: $ py.test test_scenarios.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_scenarios.py .... - ========================= 4 passed in 0.02 seconds ========================= + ======= 4 passed in 0.12 seconds ======== If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:: $ py.test --collect-only test_scenarios.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items @@ -195,7 +195,7 @@ - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== Note that we told ``metafunc.parametrize()`` that your scenario values should be considered class-scoped. 
With pytest-2.3 this leads to a @@ -248,24 +248,24 @@ Let's first see how it looks like at collection time:: $ py.test test_backends.py --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== And then when we run the test:: $ py.test -q test_backends.py .F - ================================= FAILURES ================================= - _________________________ test_db_initialized[d2] __________________________ + ======= FAILURES ======== + _______ test_db_initialized[d2] ________ - db = + db = def test_db_initialized(db): # a dummy test @@ -274,7 +274,7 @@ E Failed: deliberately failing for demo purposes test_backends.py:6: Failed - 1 failed, 1 passed in 0.01 seconds + 1 failed, 1 passed in 0.12 seconds The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase. @@ -318,17 +318,17 @@ $ py.test -q F.. 
- ================================= FAILURES ================================= - ________________________ TestClass.test_equals[2-1] ________________________ + ======= FAILURES ======== + _______ TestClass.test_equals[1-2] ________ - self = , a = 1, b = 2 + self = , a = 1, b = 2 def test_equals(self, a, b): > assert a == b E assert 1 == 2 test_parametrize.py:18: AssertionError - 1 failed, 2 passed in 0.02 seconds + 1 failed, 2 passed in 0.12 seconds Indirect parametrization with multiple fixtures -------------------------------------------------------------- @@ -347,8 +347,11 @@ Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize):: . $ py.test -rs -q multipython.py - ........................... - 27 passed in 4.14 seconds + ssssssssssss...ssssssssssss + ======= short test summary info ======== + SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python3.3' not found + SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python2.6' not found + 3 passed, 24 skipped in 0.12 seconds Indirect parametrization of optional implementations/imports -------------------------------------------------------------------- @@ -394,16 +397,16 @@ If you run this with reporting for skips enabled:: $ py.test -rs test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .s - ========================= short test summary info ========================== - SKIP [1] /tmp/doc-exec-159/conftest.py:10: could not import 'opt2' + ======= short test summary info ======== + SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 
'opt2' - =================== 1 passed, 1 skipped in 0.01 seconds ==================== + ======= 1 passed, 1 skipped in 0.12 seconds ======== You'll see that we don't have a ``opt2`` module and thus the second test run of our ``test_func1`` was skipped. A few notes: diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/pythoncollection.txt --- a/doc/en/example/pythoncollection.txt +++ b/doc/en/example/pythoncollection.txt @@ -42,9 +42,9 @@ then the test collection looks like this:: $ py.test --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-160, inifile: setup.cfg + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: setup.cfg collected 2 items @@ -52,7 +52,7 @@ - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== .. note:: @@ -88,9 +88,9 @@ You can always peek at the collection tree without running tests like this:: . 
$ py.test --collect-only pythoncollection.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $PWD/doc/en, inifile: pytest.ini collected 3 items @@ -99,7 +99,7 @@ - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== customizing test collection to find all .py files --------------------------------------------------------- @@ -142,12 +142,14 @@ interpreters and will leave out the setup.py file:: $ py.test --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-160, inifile: pytest.ini - collected 0 items + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini + collected 1 items + + - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== If you run with a Python3 interpreter the moduled added through the conftest.py file will not be considered for test collection. diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/reportingdemo.txt --- a/doc/en/example/reportingdemo.txt +++ b/doc/en/example/reportingdemo.txt @@ -12,15 +12,15 @@ .. 
code-block:: python assertion $ py.test failure_demo.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $PWD/doc/en, inifile: pytest.ini collected 42 items failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF - ================================= FAILURES ================================= - ____________________________ test_generative[0] ____________________________ + ======= FAILURES ======== + _______ test_generative[0] ________ param1 = 3, param2 = 6 @@ -29,9 +29,9 @@ E assert (3 * 2) < 6 failure_demo.py:15: AssertionError - _________________________ TestFailing.test_simple __________________________ + _______ TestFailing.test_simple ________ - self = + self = def test_simple(self): def f(): @@ -41,13 +41,13 @@ > assert f() == g() E assert 42 == 43 - E + where 42 = .f at 0x7f65f2315510>() - E + and 43 = .g at 0x7f65f2323510>() + E + where 42 = () + E + and 43 = () failure_demo.py:28: AssertionError - ____________________ TestFailing.test_simple_multiline _____________________ + _______ TestFailing.test_simple_multiline ________ - self = + self = def test_simple_multiline(self): otherfunc_multi( @@ -55,7 +55,7 @@ > 6*9) failure_demo.py:33: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a = 42, b = 54 @@ -65,21 +65,21 @@ E assert 42 == 54 failure_demo.py:11: AssertionError - ___________________________ TestFailing.test_not ___________________________ + _______ TestFailing.test_not ________ - self = + self = def test_not(self): def f(): return 42 > assert not f() E assert not 42 - E + where 42 = .f at 0x7f65f2323598>() + E + where 42 = () failure_demo.py:38: AssertionError - 
_________________ TestSpecialisedExplanations.test_eq_text _________________ + _______ TestSpecialisedExplanations.test_eq_text ________ - self = + self = def test_eq_text(self): > assert 'spam' == 'eggs' @@ -88,9 +88,9 @@ E + eggs failure_demo.py:42: AssertionError - _____________ TestSpecialisedExplanations.test_eq_similar_text _____________ + _______ TestSpecialisedExplanations.test_eq_similar_text ________ - self = + self = def test_eq_similar_text(self): > assert 'foo 1 bar' == 'foo 2 bar' @@ -101,9 +101,9 @@ E ? ^ failure_demo.py:45: AssertionError - ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________ + _______ TestSpecialisedExplanations.test_eq_multiline_text ________ - self = + self = def test_eq_multiline_text(self): > assert 'foo\nspam\nbar' == 'foo\neggs\nbar' @@ -114,9 +114,9 @@ E bar failure_demo.py:48: AssertionError - ______________ TestSpecialisedExplanations.test_eq_long_text _______________ + _______ TestSpecialisedExplanations.test_eq_long_text ________ - self = + self = def test_eq_long_text(self): a = '1'*100 + 'a' + '2'*100 @@ -131,9 +131,9 @@ E ? 
^ failure_demo.py:53: AssertionError - _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________ + _______ TestSpecialisedExplanations.test_eq_long_text_multiline ________ - self = + self = def test_eq_long_text_multiline(self): a = '1\n'*100 + 'a' + '2\n'*100 @@ -155,9 +155,9 @@ E 2 failure_demo.py:58: AssertionError - _________________ TestSpecialisedExplanations.test_eq_list _________________ + _______ TestSpecialisedExplanations.test_eq_list ________ - self = + self = def test_eq_list(self): > assert [0, 1, 2] == [0, 1, 3] @@ -166,9 +166,9 @@ E Use -v to get the full diff failure_demo.py:61: AssertionError - ______________ TestSpecialisedExplanations.test_eq_list_long _______________ + _______ TestSpecialisedExplanations.test_eq_list_long ________ - self = + self = def test_eq_list_long(self): a = [0]*100 + [1] + [3]*100 @@ -179,9 +179,9 @@ E Use -v to get the full diff failure_demo.py:66: AssertionError - _________________ TestSpecialisedExplanations.test_eq_dict _________________ + _______ TestSpecialisedExplanations.test_eq_dict ________ - self = + self = def test_eq_dict(self): > assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0} @@ -196,9 +196,9 @@ E Use -v to get the full diff failure_demo.py:69: AssertionError - _________________ TestSpecialisedExplanations.test_eq_set __________________ + _______ TestSpecialisedExplanations.test_eq_set ________ - self = + self = def test_eq_set(self): > assert set([0, 10, 11, 12]) == set([0, 20, 21]) @@ -213,9 +213,9 @@ E Use -v to get the full diff failure_demo.py:72: AssertionError - _____________ TestSpecialisedExplanations.test_eq_longer_list ______________ + _______ TestSpecialisedExplanations.test_eq_longer_list ________ - self = + self = def test_eq_longer_list(self): > assert [1,2] == [1,2,3] @@ -224,18 +224,18 @@ E Use -v to get the full diff failure_demo.py:75: AssertionError - _________________ TestSpecialisedExplanations.test_in_list _________________ + _______ 
TestSpecialisedExplanations.test_in_list ________ - self = + self = def test_in_list(self): > assert 1 in [0, 2, 3, 4, 5] E assert 1 in [0, 2, 3, 4, 5] failure_demo.py:78: AssertionError - __________ TestSpecialisedExplanations.test_not_in_text_multiline __________ + _______ TestSpecialisedExplanations.test_not_in_text_multiline ________ - self = + self = def test_not_in_text_multiline(self): text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail' @@ -251,9 +251,9 @@ E tail failure_demo.py:82: AssertionError - ___________ TestSpecialisedExplanations.test_not_in_text_single ____________ + _______ TestSpecialisedExplanations.test_not_in_text_single ________ - self = + self = def test_not_in_text_single(self): text = 'single foo line' @@ -264,9 +264,9 @@ E ? +++ failure_demo.py:86: AssertionError - _________ TestSpecialisedExplanations.test_not_in_text_single_long _________ + _______ TestSpecialisedExplanations.test_not_in_text_single_long ________ - self = + self = def test_not_in_text_single_long(self): text = 'head ' * 50 + 'foo ' + 'tail ' * 20 @@ -277,9 +277,9 @@ E ? +++ failure_demo.py:90: AssertionError - ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______ + _______ TestSpecialisedExplanations.test_not_in_text_single_long_term ________ - self = + self = def test_not_in_text_single_long_term(self): text = 'head ' * 50 + 'f'*70 + 'tail ' * 20 @@ -290,7 +290,7 @@ E ? 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ failure_demo.py:94: AssertionError - ______________________________ test_attribute ______________________________ + _______ test_attribute ________ def test_attribute(): class Foo(object): @@ -298,21 +298,21 @@ i = Foo() > assert i.b == 2 E assert 1 == 2 - E + where 1 = .Foo object at 0x7f65f1c814e0>.b + E + where 1 = .b failure_demo.py:101: AssertionError - _________________________ test_attribute_instance __________________________ + _______ test_attribute_instance ________ def test_attribute_instance(): class Foo(object): b = 1 > assert Foo().b == 2 E assert 1 == 2 - E + where 1 = .Foo object at 0x7f65f1c7f7f0>.b - E + where .Foo object at 0x7f65f1c7f7f0> = .Foo'>() + E + where 1 = .b + E + where = () failure_demo.py:107: AssertionError - __________________________ test_attribute_failure __________________________ + _______ test_attribute_failure ________ def test_attribute_failure(): class Foo(object): @@ -323,16 +323,16 @@ > assert i.b == 2 failure_demo.py:116: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ - self = .Foo object at 0x7f65f1c97dd8> + self = def _get_b(self): > raise Exception('Failed to get attrib') E Exception: Failed to get attrib failure_demo.py:113: Exception - _________________________ test_attribute_multiple __________________________ + _______ test_attribute_multiple ________ def test_attribute_multiple(): class Foo(object): @@ -341,57 +341,57 @@ b = 2 > assert Foo().b == Bar().b E assert 1 == 2 - E + where 1 = .Foo object at 0x7f65f1c9b630>.b - E + where .Foo object at 0x7f65f1c9b630> = .Foo'>() - E + and 2 = .Bar object at 0x7f65f1c9b2b0>.b - E + where .Bar object at 0x7f65f1c9b2b0> = .Bar'>() + E + where 1 = .b + E + where = () + E + and 2 = .b + E + where = () failure_demo.py:124: AssertionError - __________________________ TestRaises.test_raises 
__________________________ + _______ TestRaises.test_raises ________ - self = + self = def test_raises(self): s = 'qwe' > raises(TypeError, "int(s)") failure_demo.py:133: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > int(s) E ValueError: invalid literal for int() with base 10: 'qwe' - <0-codegen /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1075>:1: ValueError - ______________________ TestRaises.test_raises_doesnt _______________________ + <0-codegen $PWD/_pytest/python.py:1091>:1: ValueError + _______ TestRaises.test_raises_doesnt ________ - self = + self = def test_raises_doesnt(self): > raises(IOError, "int('3')") E Failed: DID NOT RAISE failure_demo.py:136: Failed - __________________________ TestRaises.test_raise ___________________________ + _______ TestRaises.test_raise ________ - self = + self = def test_raise(self): > raise ValueError("demo error") E ValueError: demo error failure_demo.py:139: ValueError - ________________________ TestRaises.test_tupleerror ________________________ + _______ TestRaises.test_tupleerror ________ - self = + self = def test_tupleerror(self): > a,b = [1] E ValueError: need more than 1 value to unpack failure_demo.py:142: ValueError - ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______ + _______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ________ - self = + self = def test_reinterpret_fails_with_print_for_the_fun_of_it(self): l = [1,2,3] @@ -400,18 +400,18 @@ E TypeError: 'int' object is not iterable failure_demo.py:147: TypeError - --------------------------- Captured stdout call --------------------------- + ----------------------------- Captured stdout call ----------------------------- l is [1, 2, 3] - ________________________ TestRaises.test_some_error ________________________ + _______ TestRaises.test_some_error ________ - self = + 
self = def test_some_error(self): > if namenotexi: - E NameError: name 'namenotexi' is not defined + E NameError: global name 'namenotexi' is not defined failure_demo.py:150: NameError - ____________________ test_dynamic_compile_shows_nicely _____________________ + _______ test_dynamic_compile_shows_nicely ________ def test_dynamic_compile_shows_nicely(): src = 'def foo():\n assert 1 == 0\n' @@ -423,16 +423,16 @@ > module.foo() failure_demo.py:165: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def foo(): > assert 1 == 0 E assert 1 == 0 - <2-codegen 'abc-123' /tmp/sandbox/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError - ____________________ TestMoreErrors.test_complex_error _____________________ + <2-codegen 'abc-123' $PWD/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError + _______ TestMoreErrors.test_complex_error ________ - self = + self = def test_complex_error(self): def f(): @@ -442,10 +442,10 @@ > somefunc(f(), g()) failure_demo.py:175: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ failure_demo.py:8: in somefunc otherfunc(x,y) - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a = 44, b = 43 @@ -454,9 +454,9 @@ E assert 44 == 43 failure_demo.py:5: AssertionError - ___________________ TestMoreErrors.test_z1_unpack_error ____________________ + _______ TestMoreErrors.test_z1_unpack_error ________ - self = + self = def test_z1_unpack_error(self): l = [] @@ -464,9 +464,9 @@ E ValueError: need more than 0 values to unpack failure_demo.py:179: ValueError - ____________________ TestMoreErrors.test_z2_type_error _____________________ + _______ TestMoreErrors.test_z2_type_error ________ - self = + self 
= def test_z2_type_error(self): l = 3 @@ -474,21 +474,21 @@ E TypeError: 'int' object is not iterable failure_demo.py:183: TypeError - ______________________ TestMoreErrors.test_startswith ______________________ + _______ TestMoreErrors.test_startswith ________ - self = + self = def test_startswith(self): s = "123" g = "456" > assert s.startswith(g) - E assert ('456') - E + where = '123'.startswith + E assert ('456') + E + where = '123'.startswith failure_demo.py:188: AssertionError - __________________ TestMoreErrors.test_startswith_nested ___________________ + _______ TestMoreErrors.test_startswith_nested ________ - self = + self = def test_startswith_nested(self): def f(): @@ -496,15 +496,15 @@ def g(): return "456" > assert f().startswith(g()) - E assert ('456') - E + where = '123'.startswith - E + where '123' = .f at 0x7f65f1c32950>() - E + and '456' = .g at 0x7f65f1c32ea0>() + E assert ('456') + E + where = '123'.startswith + E + where '123' = () + E + and '456' = () failure_demo.py:195: AssertionError - _____________________ TestMoreErrors.test_global_func ______________________ + _______ TestMoreErrors.test_global_func ________ - self = + self = def test_global_func(self): > assert isinstance(globf(42), float) @@ -512,20 +512,20 @@ E + where 43 = globf(42) failure_demo.py:198: AssertionError - _______________________ TestMoreErrors.test_instance _______________________ + _______ TestMoreErrors.test_instance ________ - self = + self = def test_instance(self): self.x = 6*7 > assert self.x != 42 E assert 42 != 42 - E + where 42 = .x + E + where 42 = .x failure_demo.py:202: AssertionError - _______________________ TestMoreErrors.test_compare ________________________ + _______ TestMoreErrors.test_compare ________ - self = + self = def test_compare(self): > assert globf(10) < 5 @@ -533,9 +533,9 @@ E + where 11 = globf(10) failure_demo.py:205: AssertionError - _____________________ TestMoreErrors.test_try_finally ______________________ + _______ 
TestMoreErrors.test_try_finally ________ - self = + self = def test_try_finally(self): x = 1 @@ -544,9 +544,9 @@ E assert 1 == 0 failure_demo.py:210: AssertionError - ___________________ TestCustomAssertMsg.test_single_line ___________________ + _______ TestCustomAssertMsg.test_single_line ________ - self = + self = def test_single_line(self): class A: @@ -555,12 +555,12 @@ > assert A.a == b, "A.a appears not to be b" E AssertionError: A.a appears not to be b E assert 1 == 2 - E + where 1 = .A'>.a + E + where 1 = .a failure_demo.py:221: AssertionError - ____________________ TestCustomAssertMsg.test_multiline ____________________ + _______ TestCustomAssertMsg.test_multiline ________ - self = + self = def test_multiline(self): class A: @@ -572,12 +572,12 @@ E or does not appear to be b E one of those E assert 1 == 2 - E + where 1 = .A'>.a + E + where 1 = .a failure_demo.py:227: AssertionError - ___________________ TestCustomAssertMsg.test_custom_repr ___________________ + _______ TestCustomAssertMsg.test_custom_repr ________ - self = + self = def test_custom_repr(self): class JSON: @@ -595,4 +595,4 @@ E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a failure_demo.py:237: AssertionError - ======================== 42 failed in 0.35 seconds ========================= + ======= 42 failed in 0.12 seconds ======== diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/simple.txt --- a/doc/en/example/simple.txt +++ b/doc/en/example/simple.txt @@ -39,11 +39,11 @@ $ py.test -q test_sample.py F - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ - + ======= FAILURES ======== + _______ test_answer ________ + cmdopt = 'type1' - + def test_answer(cmdopt): if cmdopt == "type1": print ("first") @@ -51,21 +51,21 @@ print ("second") > assert 0 # to see what was printed E assert 0 - + test_sample.py:6: AssertionError - 
--------------------------- Captured stdout call --------------------------- + ----------------------------- Captured stdout call ----------------------------- first - 1 failed in 0.01 seconds + 1 failed in 0.12 seconds And now with supplying a command line option:: $ py.test -q --cmdopt=type2 F - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ - + ======= FAILURES ======== + _______ test_answer ________ + cmdopt = 'type2' - + def test_answer(cmdopt): if cmdopt == "type1": print ("first") @@ -73,11 +73,11 @@ print ("second") > assert 0 # to see what was printed E assert 0 - + test_sample.py:6: AssertionError - --------------------------- Captured stdout call --------------------------- + ----------------------------- Captured stdout call ----------------------------- second - 1 failed in 0.01 seconds + 1 failed in 0.12 seconds You can see that the command line option arrived in our test. This completes the basic pattern. However, one often rather wants to process @@ -107,12 +107,12 @@ directory with the above conftest.py:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== .. 
_`excontrolskip`: @@ -152,28 +152,28 @@ and when running it will see a skipped "slow" test:: $ py.test -rs # "-rs" means report details on the little 's' - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - + test_module.py .s - ========================= short test summary info ========================== - SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run - - =================== 1 passed, 1 skipped in 0.01 seconds ==================== + ======= short test summary info ======== + SKIP [1] $REGENDOC_TMPDIR/conftest.py:9: need --runslow option to run + + ======= 1 passed, 1 skipped in 0.12 seconds ======== Or run it including the ``slow`` marked test:: $ py.test --runslow - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - + test_module.py .. 
- - ========================= 2 passed in 0.01 seconds ========================= + + ======= 2 passed in 0.12 seconds ======== Writing well integrated assertion helpers -------------------------------------------------- @@ -203,15 +203,15 @@ $ py.test -q test_checkconfig.py F - ================================= FAILURES ================================= - ______________________________ test_something ______________________________ - + ======= FAILURES ======== + _______ test_something ________ + def test_something(): > checkconfig(42) E Failed: not configured: 42 - + test_checkconfig.py:8: Failed - 1 failed in 0.02 seconds + 1 failed in 0.12 seconds Detect if running from within a pytest run -------------------------------------------------------------- @@ -258,13 +258,13 @@ which will add the string to the test header accordingly:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 project deps: mylib-1.1 + rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== .. regendoc:wipe @@ -282,24 +282,24 @@ which will add info only when run with "--v":: $ py.test -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 info1: did you know that ... did you? + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== and nothing when run plainly:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== profiling test duration -------------------------- @@ -327,18 +327,18 @@ Now we can profile which test functions execute the slowest:: $ py.test --durations=3 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items - + test_some_are_slow.py ... 
- - ========================= slowest 3 test durations ========================= + + ======= slowest 3 test durations ======== 0.20s call test_some_are_slow.py::test_funcslow2 0.10s call test_some_are_slow.py::test_funcslow1 - 0.00s setup test_some_are_slow.py::test_funcslow2 - ========================= 3 passed in 0.31 seconds ========================= + 0.00s setup test_some_are_slow.py::test_funcfast + ======= 3 passed in 0.12 seconds ======== incremental testing - test steps --------------------------------------------------- @@ -389,27 +389,27 @@ If we run this:: $ py.test -rx - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items - + test_step.py .Fx. - - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ - - self = - + + ======= FAILURES ======== + _______ TestUserHandling.test_modification ________ + + self = + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError - ========================= short test summary info ========================== + ======= short test summary info ======== XFAIL test_step.py::TestUserHandling::()::test_deletion reason: previous test failed (test_modification) - ============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds =============== + ======= 1 failed, 2 passed, 1 xfailed in 0.12 seconds ======== We'll see that ``test_deletion`` was not executed because ``test_modification`` failed. It is reported as an "expected failure". 
@@ -460,56 +460,56 @@ We can run this:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 7 items - + test_step.py .Fx. a/test_db.py F a/test_db2.py F b/test_error.py E - - ================================== ERRORS ================================== - _______________________ ERROR at setup of test_root ________________________ - file /tmp/doc-exec-162/b/test_error.py, line 1 + + ======= ERRORS ======== + _______ ERROR at setup of test_root ________ + file $REGENDOC_TMPDIR/b/test_error.py, line 1 def test_root(db): # no db here, will error out fixture 'db' not found - available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd + available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir use 'py.test --fixtures [testpath]' for help on them. 
- - /tmp/doc-exec-162/b/test_error.py:1 - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ - - self = - + + $REGENDOC_TMPDIR/b/test_error.py:1 + ======= FAILURES ======== + _______ TestUserHandling.test_modification ________ + + self = + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError - _________________________________ test_a1 __________________________________ - - db = - + _______ test_a1 ________ + + db = + def test_a1(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: E assert 0 - + a/test_db.py:2: AssertionError - _________________________________ test_a2 __________________________________ - - db = - + _______ test_a2 ________ + + db = + def test_a2(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: E assert 0 - + a/test_db2.py:2: AssertionError - ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ========== + ======= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ======== The two test modules in the ``a`` directory see the same ``db`` fixture instance while the one test in the sister-directory ``b`` doesn't see it. 
We could of course @@ -563,37 +563,36 @@ and run them:: $ py.test test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - + test_module.py FF - - ================================= FAILURES ================================= - ________________________________ test_fail1 ________________________________ - - tmpdir = local('/tmp/pytest-22/test_fail10') - + + ======= FAILURES ======== + _______ test_fail1 ________ + + tmpdir = local('/tmp/pytest-NaN/test_fail10') + def test_fail1(tmpdir): > assert 0 E assert 0 - + test_module.py:2: AssertionError - ________________________________ test_fail2 ________________________________ - + _______ test_fail2 ________ + def test_fail2(): > assert 0 E assert 0 - + test_module.py:4: AssertionError - ========================= 2 failed in 0.02 seconds ========================= + ======= 2 failed in 0.12 seconds ======== you will have a "failures" file which contains the failing test ids:: $ cat failures - test_module.py::test_fail1 (/tmp/pytest-22/test_fail10) - test_module.py::test_fail2 + cat: failures: No such file or directory Making test result information available in fixtures ----------------------------------------------------------- @@ -654,42 +653,42 @@ and run it:: $ py.test -s test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items - - test_module.py Esetting up a test failed! 
test_module.py::test_setup_fails - Fexecuting test failed test_module.py::test_call_fails + + test_module.py E('setting up a test failed!', 'test_module.py::test_setup_fails') + F('executing test failed', 'test_module.py::test_call_fails') F - - ================================== ERRORS ================================== - ____________________ ERROR at setup of test_setup_fails ____________________ - + + ======= ERRORS ======== + _______ ERROR at setup of test_setup_fails ________ + @pytest.fixture def other(): > assert 0 E assert 0 - + test_module.py:6: AssertionError - ================================= FAILURES ================================= - _____________________________ test_call_fails ______________________________ - + ======= FAILURES ======== + _______ test_call_fails ________ + something = None - + def test_call_fails(something): > assert 0 E assert 0 - + test_module.py:12: AssertionError - ________________________________ test_fail2 ________________________________ - + _______ test_fail2 ________ + def test_fail2(): > assert 0 E assert 0 - + test_module.py:15: AssertionError - ==================== 2 failed, 1 error in 0.02 seconds ===================== + ======= 2 failed, 1 warnings, 1 error in 0.12 seconds ======== You'll see that the fixture finalizers could use the precise reporting information. @@ -744,4 +743,4 @@ application, using standard ``py.test`` command-line options:: $ ./app_main --pytest --verbose --tb=long --junit-xml=results.xml test-suite/ - /bin/sh: 1: ./app_main: not found + /bin/sh: ./app_main: No such file or directory diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/special.txt --- a/doc/en/example/special.txt +++ b/doc/en/example/special.txt @@ -69,4 +69,4 @@ .test other .test_unit1 method called . - 4 passed in 0.03 seconds + 4 passed in 0.12 seconds This diff is so big that we needed to truncate the remainder. 
Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Thu Jun 11 00:39:25 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 10 Jun 2015 22:39:25 -0000 Subject: [Pytest-commit] commit/pytest: 3 new changesets Message-ID: <20150610223925.30417.7752@app02.ash-private.bitbucket.org> 3 new commits in pytest: https://bitbucket.org/pytest-dev/pytest/commits/b912a9ae0fef/ Changeset: b912a9ae0fef Branch: issue-741-pytester-output User: flub Date: 2015-06-10 22:38:23+00:00 Summary: close branch Affected #: 0 files https://bitbucket.org/pytest-dev/pytest/commits/197c7423cfbc/ Changeset: 197c7423cfbc Branch: include-setup-teardown-duration-in-junitxml User: flub Date: 2015-06-10 22:38:40+00:00 Summary: close branch Affected #: 0 files https://bitbucket.org/pytest-dev/pytest/commits/9fa1a8f1bc24/ Changeset: 9fa1a8f1bc24 Branch: regendoc-upgrade User: flub Date: 2015-06-10 22:38:52+00:00 Summary: close branch Affected #: 0 files Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From issues-reply at bitbucket.org Fri Jun 12 10:46:42 2015 From: issues-reply at bitbucket.org (Thomas Güttler) Date: Fri, 12 Jun 2015 08:46:42 -0000 Subject: [Pytest-commit] Issue #766: Use setuptools, not distutils (pytest-dev/pytest) Message-ID: <20150612084642.9869.33873@app09.ash-private.bitbucket.org> New issue 766: Use setuptools, not distutils https://bitbucket.org/pytest-dev/pytest/issue/766/use-setuptools-not-distutils Thomas Güttler: It is very confusing for developers new to python if they read "you can use distutils or setuptools ...."
https://pytest.org/latest/goodpractises.html#integrating-with-distutils-python-setup-py-test The future is setuptools. I think the best way for today: In the docs, just use setuptools. Don't even mention distutils, it confuses new users. And experts know what is (or was) going on. First step: Do you agree? If yes: next step: who is going to change the docs? From issues-reply at bitbucket.org Fri Jun 12 21:18:46 2015 From: issues-reply at bitbucket.org (Eric Siegerman) Date: Fri, 12 Jun 2015 19:18:46 -0000 Subject: [Pytest-commit] Issue #767: pytest.raises() doesn't always return Exception instance in py26 (pytest-dev/pytest) Message-ID: <20150612191846.3481.16225@app09.ash-private.bitbucket.org> New issue 767: pytest.raises() doesn't always return Exception instance in py26 https://bitbucket.org/pytest-dev/pytest/issue/767/pytestraises-doesnt-always-return Eric Siegerman: Typically, the ExceptionInfo.value returned by _pytest.raises()_ (when the exception is indeed raised) is an Exception instance. But under certain conditions, it isn't. It looks as though it contains the Exception's _.args_, rather than the Exception itself. This happens when: * Python is 2.6 (I've tested with 2.6.6 and 2.6.9), and * The exception is one of Python's internal ones (I've tested with ZeroDivisionError, KeyError, and IOError; not sure whether this applies to others) * The exception was raised by the interpreter -- if I explicitly `raise KeyError("msg")`, for example, the test passes The attached tests demonstrate the problem -- and the more useful behaviour when you use vanilla _try/except_ instead of _pytest.raises()_. Under Python 2.6, the three _pytest.raises()_ cases fail; under Python 2.7 and 3.4, they pass. (The _try/except_ tests pass under all three Python versions.) 
From commits-noreply at bitbucket.org Wed Jun 10 11:45:05 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 10 Jun 2015 09:45:05 -0000 Subject: [Pytest-commit] commit/pytest: 3 new changesets Message-ID: <20150610094505.19986.21428@app13.ash-private.bitbucket.org> 3 new commits in pytest: https://bitbucket.org/pytest-dev/pytest/commits/d16d44694393/ Changeset: d16d44694393 User: RonnyPfannschmidt Date: 2015-06-06 12:15:23+00:00 Summary: make regendoc a direct requirement Affected #: 1 file diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d16d44694393332a23621766f9d6e08f0c48a0b3 requirements-docs.txt --- a/requirements-docs.txt +++ b/requirements-docs.txt @@ -1,2 +1,2 @@ sphinx==1.2.3 -hg+ssh://hg at bitbucket.org/pytest-dev/regendoc#egg=regendoc +regendoc https://bitbucket.org/pytest-dev/pytest/commits/f4815f2e0d30/ Changeset: f4815f2e0d30 Branch: regendoc-upgrade User: RonnyPfannschmidt Date: 2015-06-06 21:30:49+00:00 Summary: use regendoc normalization and regenerate docs Affected #: 20 files diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca Makefile --- a/Makefile +++ b/Makefile @@ -1,6 +1,11 @@ # Set of targets useful for development/release process PYTHON = python2.7 PATH := $(PWD)/.env/bin:$(PATH) +REGENDOC_ARGS := \ + --normalize "/={8,} (.*) ={8,}/======= \1 ========/" \ + --normalize "/_{8,} (.*) _{8,}/_______ \1 ________/" \ + --normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \ + --normalize "@/tmp/pytest-\d+/@/tmp/pytest-NaN/@" # prepare virtual python environment .env: @@ -16,10 +21,11 @@ # generate documentation docs: develop - find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc + find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS} cd doc/en; make html # upload documentation upload-docs: develop - find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc --update - cd 
doc/en; make install + find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS} --update + #cd doc/en; make install + diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/assert.txt --- a/doc/en/assert.txt +++ b/doc/en/assert.txt @@ -25,15 +25,15 @@ you will see the return value of the function call:: $ py.test test_assert1.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-87, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_assert1.py F - ================================= FAILURES ================================= - ______________________________ test_function _______________________________ + ======= FAILURES ======== + _______ test_function ________ def test_function(): > assert f() == 4 @@ -41,7 +41,7 @@ E + where 3 = f() test_assert1.py:5: AssertionError - ========================= 1 failed in 0.01 seconds ========================= + ======= 1 failed in 0.12 seconds ======== ``pytest`` has support for showing the values of the most common subexpressions including calls, attributes, comparisons, and binary and unary @@ -135,15 +135,15 @@ if you run this module:: $ py.test test_assert2.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-87, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_assert2.py F - ================================= FAILURES ================================= - ___________________________ test_set_comparison ____________________________ + ======= 
FAILURES ======== + _______ test_set_comparison ________ def test_set_comparison(): set1 = set("1308") @@ -157,7 +157,7 @@ E Use -v to get the full diff test_assert2.py:5: AssertionError - ========================= 1 failed in 0.01 seconds ========================= + ======= 1 failed in 0.12 seconds ======== Special comparisons are done for a number of cases: @@ -202,8 +202,8 @@ $ py.test -q test_foocompare.py F - ================================= FAILURES ================================= - _______________________________ test_compare _______________________________ + ======= FAILURES ======== + _______ test_compare ________ def test_compare(): f1 = Foo(1) @@ -213,7 +213,7 @@ E vals: 1 != 2 test_foocompare.py:8: AssertionError - 1 failed in 0.01 seconds + 1 failed in 0.12 seconds .. _assert-details: .. _`assert introspection`: diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/builtin.txt --- a/doc/en/builtin.txt +++ b/doc/en/builtin.txt @@ -115,4 +115,4 @@ directory. The returned object is a `py.path.local`_ path object. 
- in 0.00 seconds + in 0.12 seconds diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/capture.txt --- a/doc/en/capture.txt +++ b/doc/en/capture.txt @@ -63,24 +63,24 @@ of the failing function and hide the other one:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-90, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .F - ================================= FAILURES ================================= - ________________________________ test_func2 ________________________________ + ======= FAILURES ======== + _______ test_func2 ________ def test_func2(): > assert False E assert False test_module.py:9: AssertionError - -------------------------- Captured stdout setup --------------------------- - setting up - ==================== 1 failed, 1 passed in 0.01 seconds ==================== + ---------------------------- Captured stdout setup ----------------------------- + setting up + ======= 1 failed, 1 passed in 0.12 seconds ======== Accessing captured output from a test function --------------------------------------------------- diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/doctest.txt --- a/doc/en/doctest.txt +++ b/doc/en/doctest.txt @@ -43,14 +43,14 @@ then you can just invoke ``py.test`` without command line options:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-96, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 1 items 
mymodule.py . - ========================= 1 passed in 0.06 seconds ========================= + ======= 1 passed in 0.12 seconds ======== It is possible to use fixtures using the ``getfixture`` helper:: diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/example/markers.txt --- a/doc/en/example/markers.txt +++ b/doc/en/example/markers.txt @@ -30,30 +30,30 @@ You can then restrict a test run to only run tests marked with ``webtest``:: $ py.test -v -m webtest - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::test_send_http PASSED - =================== 3 tests deselected by "-m 'webtest'" =================== - ================== 1 passed, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by "-m 'webtest'" ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== Or the inverse, running all tests except the webtest ones:: $ py.test -v -m "not webtest" - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_something_quick PASSED test_server.py::test_another PASSED test_server.py::TestClass::test_method PASSED - ================= 1 tests deselected by "-m 'not webtest'" ================= - ================== 3 passed, 1 deselected in 0.01 seconds ================== + ======= 1 tests deselected by "-m 'not webtest'" ======== + ======= 3 passed, 1 deselected in 0.12 seconds ======== Selecting tests based on their node ID -------------------------------------- @@ -63,39 +63,39 @@ tests based on their module, class, method, or function name:: $ py.test -v test_server.py::TestClass::test_method - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 5 items test_server.py::TestClass::test_method PASSED - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== You can also select on the class:: $ py.test -v test_server.py::TestClass - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::TestClass::test_method PASSED - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== Or select multiple nodes:: $ py.test -v test_server.py::TestClass test_server.py::test_send_http - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 8 items test_server.py::TestClass::test_method PASSED test_server.py::test_send_http PASSED - ========================= 2 passed in 0.01 seconds ========================= + ======= 2 passed in 0.12 seconds ======== .. _node-id: @@ -124,44 +124,44 @@ select tests based on their names:: $ py.test -v -k http # running with the above defined example module - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_send_http PASSED - ====================== 3 tests deselected by '-khttp' ====================== - ================== 1 passed, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by '-khttp' ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== And you can also run all tests except the ones that match the keyword:: $ py.test -k "not send_http" -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::test_something_quick PASSED test_server.py::test_another PASSED test_server.py::TestClass::test_method PASSED - ================= 1 tests deselected by '-knot send_http' ================== - ================== 3 passed, 1 deselected in 0.01 seconds ================== + ======= 1 tests deselected by '-knot send_http' ======== + ======= 3 passed, 1 deselected in 0.12 seconds ======== Or to select "http" and "quick" tests:: $ py.test -k "http or quick" -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_send_http PASSED test_server.py::test_something_quick PASSED - ================= 2 tests deselected by '-khttp or quick' ================== - ================== 2 passed, 2 deselected in 0.01 seconds ================== + ======= 2 tests deselected by '-khttp or quick' ======== + ======= 2 passed, 2 deselected in 0.12 seconds ======== .. note:: @@ -201,9 +201,9 @@ @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures - @pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. + @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. - @pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. + @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. 
For an example on how to add and work with markers from a plugin, see @@ -341,26 +341,26 @@ the test needs:: $ py.test -E stage2 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py s - ======================== 1 skipped in 0.01 seconds ========================= + ======= 1 skipped in 0.12 seconds ======== and here is one that specifies exactly the environment needed:: $ py.test -E stage1 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py . - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== The ``--markers`` option always gives you a list of available markers:: @@ -375,9 +375,9 @@ @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures - @pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. + @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. - @pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. 
+   @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.

 Reading markers which were set from multiple places

@@ -420,7 +420,7 @@
     glob args=('class',) kwargs={'x': 2}
     glob args=('module',) kwargs={'x': 1}
     .
-    1 passed in 0.01 seconds
+    1 passed in 0.12 seconds

 marking platform specific tests with pytest
 --------------------------------------------------------------
@@ -472,29 +472,29 @@
 then you will see two test skipped and two executed tests as expected::

     $ py.test -rs # this option reports skip reasons
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-157, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_plat.py sss.
-    ========================= short test summary info ==========================
-    SKIP [3] /tmp/doc-exec-157/conftest.py:12: cannot run on platform linux
+    test_plat.py s.s.
+    ======= short test summary info ========
+    SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux2

-    =================== 1 passed, 3 skipped in 0.01 seconds ====================
+    ======= 2 passed, 2 skipped in 0.12 seconds ========

 Note that if you specify a platform via the marker-command line option like this::

     $ py.test -m linux2
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-157, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

-    test_plat.py s
+    test_plat.py .
-    =================== 3 tests deselected by "-m 'linux2'" ====================
-    ================= 1 skipped, 3 deselected in 0.01 seconds ==================
+    ======= 3 tests deselected by "-m 'linux2'" ========
+    ======= 1 passed, 3 deselected in 0.12 seconds ========

 then the unmarked-tests will not be run.  It is thus a way to restrict the run to the specific tests.
@@ -538,47 +538,47 @@
 We can now use the ``-m option`` to select one set::

     $ py.test -m interface --tb=short
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-157, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

     test_module.py FF

-    ================================= FAILURES =================================
-    __________________________ test_interface_simple ___________________________
+    ======= FAILURES ========
+    _______ test_interface_simple ________
     test_module.py:3: in test_interface_simple
         assert 0
     E   assert 0
-    __________________________ test_interface_complex __________________________
+    _______ test_interface_complex ________
     test_module.py:6: in test_interface_complex
         assert 0
     E   assert 0
-    ================== 2 tests deselected by "-m 'interface'" ==================
-    ================== 2 failed, 2 deselected in 0.02 seconds ==================
+    ======= 2 tests deselected by "-m 'interface'" ========
+    ======= 2 failed, 2 deselected in 0.12 seconds ========

 or to select both "event" and "interface" tests::

     $ py.test -m "interface or event" --tb=short
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-157, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

     test_module.py FFF

-    ================================= FAILURES =================================
-    __________________________ test_interface_simple ___________________________
+    ======= FAILURES ========
+    _______ test_interface_simple ________
     test_module.py:3: in test_interface_simple
         assert 0
     E   assert 0
-    __________________________ test_interface_complex __________________________
+    _______ test_interface_complex ________
     test_module.py:6: in test_interface_complex
         assert 0
     E   assert 0
-    ____________________________ test_event_simple _____________________________
+    _______ test_event_simple ________
     test_module.py:9: in test_event_simple
         assert 0
     E   assert 0
-    ============= 1 tests deselected by "-m 'interface or event'" ==============
-    ================== 3 failed, 1 deselected in 0.02 seconds ==================
+    ======= 1 tests deselected by "-m 'interface or event'" ========
+    ======= 3 failed, 1 deselected in 0.12 seconds ========

diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/example/nonpython.txt
--- a/doc/en/example/nonpython.txt
+++ b/doc/en/example/nonpython.txt
@@ -26,19 +26,19 @@
 now execute the test specification::

     nonpython $ py.test test_simple.yml
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $PWD/doc/en, inifile: pytest.ini
     collected 2 items

     test_simple.yml .F

-    ================================= FAILURES =================================
-    ______________________________ usecase: hello ______________________________
+    ======= FAILURES ========
+    _______ usecase: hello ________
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ==================== 1 failed, 1 passed in 0.19 seconds ====================
+    ======= 1 failed, 1 passed in 0.12 seconds ========

 You get one dot for the passing ``sub1: sub1`` check and one failure.  Obviously in the above ``conftest.py`` you'll want to implement a more
@@ -56,31 +56,31 @@
 consulted when reporting in ``verbose`` mode::

     nonpython $ py.test -v
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-    rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+    rootdir: $PWD/doc/en, inifile: pytest.ini
     collecting ... collected 2 items

     test_simple.yml::ok PASSED
     test_simple.yml::hello FAILED

-    ================================= FAILURES =================================
-    ______________________________ usecase: hello ______________________________
+    ======= FAILURES ========
+    _______ usecase: hello ________
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ==================== 1 failed, 1 passed in 0.05 seconds ====================
+    ======= 1 failed, 1 passed in 0.12 seconds ========

 While developing your custom test collection and execution it's also interesting to just look at the collection tree::

     nonpython $ py.test --collect-only
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $PWD/doc/en, inifile: pytest.ini
     collected 2 items

-    ============================= in 0.04 seconds =============================
+    ======= in 0.12 seconds ========

diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/example/parametrize.txt
--- a/doc/en/example/parametrize.txt
+++ b/doc/en/example/parametrize.txt
@@ -46,15 +46,15 @@
     $ py.test -q test_compute.py
     ..
-    2 passed in 0.01 seconds
+    2 passed in 0.12 seconds

 We run only two computations, so we see two dots.  let's run the full monty::

     $ py.test -q --all
     ....F
-    ================================= FAILURES =================================
-    _____________________________ test_compute[4] ______________________________
+    ======= FAILURES ========
+    _______ test_compute[4] ________

     param1 = 4
@@ -63,7 +63,7 @@
     E       assert 4 < 4

     test_compute.py:3: AssertionError
-    1 failed, 4 passed in 0.02 seconds
+    1 failed, 4 passed in 0.12 seconds

 As expected when running the full range of ``param1`` values we'll get an error on the last one.
@@ -126,11 +126,11 @@
     $ py.test test_time.py --collect-only
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-159, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:

-    ============================= in 0.00 seconds =============================
+    ======= in 0.12 seconds ========
     ERROR: file not found: test_time.py

 A quick port of "testscenarios"
@@ -170,22 +170,22 @@
 this is a fully self-contained example which you can run with::

     $ py.test test_scenarios.py
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-159, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items

     test_scenarios.py ....

-    ========================= 4 passed in 0.02 seconds =========================
+    ======= 4 passed in 0.12 seconds ========

 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::

     $ py.test --collect-only test_scenarios.py
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-159, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 4 items
@@ -195,7 +195,7 @@

-    ============================= in 0.01 seconds =============================
+    ======= in 0.12 seconds ========

 Note that we told ``metafunc.parametrize()`` that your scenario values should be considered class-scoped.  With pytest-2.3 this leads to a
@@ -248,24 +248,24 @@
 Let's first see how it looks like at collection time::

     $ py.test test_backends.py --collect-only
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-159, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

-    ============================= in 0.01 seconds =============================
+    ======= in 0.12 seconds ========

 And then when we run the test::

     $ py.test -q test_backends.py
     .F
-    ================================= FAILURES =================================
-    _________________________ test_db_initialized[d2] __________________________
+    ======= FAILURES ========
+    _______ test_db_initialized[d2] ________

-    db =
+    db =

     def test_db_initialized(db):
         # a dummy test
@@ -274,7 +274,7 @@
     E       Failed: deliberately failing for demo purposes

     test_backends.py:6: Failed
-    1 failed, 1 passed in 0.01 seconds
+    1 failed, 1 passed in 0.12 seconds

 The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed.  Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
@@ -318,17 +318,17 @@
     $ py.test -q
     F..
-    ================================= FAILURES =================================
-    ________________________ TestClass.test_equals[2-1] ________________________
+    ======= FAILURES ========
+    _______ TestClass.test_equals[1-2] ________

-    self = , a = 1, b = 2
+    self = , a = 1, b = 2

     def test_equals(self, a, b):
     >       assert a == b
     E       assert 1 == 2

     test_parametrize.py:18: AssertionError
-    1 failed, 2 passed in 0.02 seconds
+    1 failed, 2 passed in 0.12 seconds

 Indirect parametrization with multiple fixtures
 --------------------------------------------------------------
@@ -347,8 +347,11 @@
 Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::

     $ py.test -rs -q multipython.py
-    ...........................
-    27 passed in 4.14 seconds
+    ssssssssssss...ssssssssssss
+    ======= short test summary info ========
+    SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python3.3' not found
+    SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python2.6' not found
+    3 passed, 24 skipped in 0.12 seconds

 Indirect parametrization of optional implementations/imports
 --------------------------------------------------------------------
@@ -394,16 +397,16 @@
 If you run this with reporting for skips enabled::

     $ py.test -rs test_module.py
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-159, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items

     test_module.py .s
-    ========================= short test summary info ==========================
-    SKIP [1] /tmp/doc-exec-159/conftest.py:10: could not import 'opt2'
+    ======= short test summary info ========
+    SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 'opt2'

-    =================== 1 passed, 1 skipped in 0.01 seconds ====================
+    ======= 1 passed, 1 skipped in 0.12 seconds ========

 You'll see that we don't have a ``opt2`` module and thus the second test run of our ``test_func1`` was skipped.  A few notes:

diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/example/pythoncollection.txt
--- a/doc/en/example/pythoncollection.txt
+++ b/doc/en/example/pythoncollection.txt
@@ -42,9 +42,9 @@
 then the test collection looks like this::

     $ py.test --collect-only
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-160, inifile: setup.cfg
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile: setup.cfg
     collected 2 items
@@ -52,7 +52,7 @@

-    ============================= in 0.01 seconds =============================
+    ======= in 0.12 seconds ========

 .. note::
@@ -88,9 +88,9 @@
 You can always peek at the collection tree without running tests like this::

     $ py.test --collect-only pythoncollection.py
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $PWD/doc/en, inifile: pytest.ini
     collected 3 items
@@ -99,7 +99,7 @@

-    ============================= in 0.01 seconds =============================
+    ======= in 0.12 seconds ========

 customizing test collection to find all .py files
 ---------------------------------------------------------
@@ -142,12 +142,14 @@
 interpreters and will leave out the setup.py file::

     $ py.test --collect-only
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-160, inifile: pytest.ini
-    collected 0 items
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
+    collected 1 items
+
+
-    ============================= in 0.01 seconds =============================
+    ======= in 0.12 seconds ========

 If you run with a Python3 interpreter the moduled added through the conftest.py file will not be considered for test collection.

diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/example/reportingdemo.txt
--- a/doc/en/example/reportingdemo.txt
+++ b/doc/en/example/reportingdemo.txt
@@ -12,15 +12,15 @@
 .. code-block:: python

     assertion $ py.test failure_demo.py
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $PWD/doc/en, inifile: pytest.ini
     collected 42 items

     failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

-    ================================= FAILURES =================================
-    ____________________________ test_generative[0] ____________________________
+    ======= FAILURES ========
+    _______ test_generative[0] ________

     param1 = 3, param2 = 6
@@ -29,9 +29,9 @@
     E       assert (3 * 2) < 6

     failure_demo.py:15: AssertionError
-    _________________________ TestFailing.test_simple __________________________
+    _______ TestFailing.test_simple ________

-    self =
+    self =

     def test_simple(self):
         def f():
@@ -41,13 +41,13 @@
     >       assert f() == g()
     E       assert 42 == 43
-    E        +  where 42 = .f at 0x7f65f2315510>()
-    E        +  and   43 = .g at 0x7f65f2323510>()
+    E        +  where 42 = ()
+    E        +  and   43 = ()

     failure_demo.py:28: AssertionError
-    ____________________ TestFailing.test_simple_multiline _____________________
+    _______ TestFailing.test_simple_multiline ________

-    self =
+    self =

     def test_simple_multiline(self):
         otherfunc_multi(
@@ -55,7 +55,7 @@
     >              6*9)

     failure_demo.py:33:
-    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
+    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

     a = 42, b = 54
@@ -65,21 +65,21 @@
     E       assert 42 == 54

     failure_demo.py:11: AssertionError
-    ___________________________ TestFailing.test_not ___________________________
+    _______ TestFailing.test_not ________

-    self =
+    self =

     def test_not(self):
         def f():
             return 42
     >       assert not f()
     E       assert not 42
-    E        +  where 42 = .f at 0x7f65f2323598>()
+    E        +  where 42 = ()

     failure_demo.py:38: AssertionError
-    _________________ TestSpecialisedExplanations.test_eq_text _________________
+    _______ TestSpecialisedExplanations.test_eq_text ________

-    self =
+    self =

     def test_eq_text(self):
     >       assert 'spam' == 'eggs'
@@ -88,9 +88,9 @@
     E         + eggs

     failure_demo.py:42: AssertionError
-    _____________ TestSpecialisedExplanations.test_eq_similar_text _____________
+    _______ TestSpecialisedExplanations.test_eq_similar_text ________

-    self =
+    self =

    def test_eq_similar_text(self):
     >       assert 'foo 1 bar' == 'foo 2 bar'
@@ -101,9 +101,9 @@
     E         ?     ^

     failure_demo.py:45: AssertionError
-    ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
+    _______ TestSpecialisedExplanations.test_eq_multiline_text ________

-    self =
+    self =

     def test_eq_multiline_text(self):
     >       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -114,9 +114,9 @@
     E         bar

     failure_demo.py:48: AssertionError
-    ______________ TestSpecialisedExplanations.test_eq_long_text _______________
+    _______ TestSpecialisedExplanations.test_eq_long_text ________

-    self =
+    self =

     def test_eq_long_text(self):
         a = '1'*100 + 'a' + '2'*100
@@ -131,9 +131,9 @@
     E         ?                                                  ^

     failure_demo.py:53: AssertionError
-    _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
+    _______ TestSpecialisedExplanations.test_eq_long_text_multiline ________

-    self =
+    self =

     def test_eq_long_text_multiline(self):
         a = '1\n'*100 + 'a' + '2\n'*100
@@ -155,9 +155,9 @@
     E         2

     failure_demo.py:58: AssertionError
-    _________________ TestSpecialisedExplanations.test_eq_list _________________
+    _______ TestSpecialisedExplanations.test_eq_list ________

-    self =
+    self =

     def test_eq_list(self):
     >       assert [0, 1, 2] == [0, 1, 3]
@@ -166,9 +166,9 @@
     E         Use -v to get the full diff

     failure_demo.py:61: AssertionError
-    ______________ TestSpecialisedExplanations.test_eq_list_long _______________
+    _______ TestSpecialisedExplanations.test_eq_list_long ________

-    self =
+    self =

     def test_eq_list_long(self):
         a = [0]*100 + [1] + [3]*100
@@ -179,9 +179,9 @@
     E         Use -v to get the full diff

     failure_demo.py:66: AssertionError
-    _________________ TestSpecialisedExplanations.test_eq_dict _________________
+    _______ TestSpecialisedExplanations.test_eq_dict ________

-    self =
+    self =

     def test_eq_dict(self):
     >       assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@@ -196,9 +196,9 @@
     E         Use -v to get the full diff

     failure_demo.py:69: AssertionError
-    _________________ TestSpecialisedExplanations.test_eq_set __________________
+    _______ TestSpecialisedExplanations.test_eq_set ________

-    self =
+    self =

     def test_eq_set(self):
     >       assert set([0, 10, 11, 12]) == set([0, 20, 21])
@@ -213,9 +213,9 @@
     E         Use -v to get the full diff

     failure_demo.py:72: AssertionError
-    _____________ TestSpecialisedExplanations.test_eq_longer_list ______________
+    _______ TestSpecialisedExplanations.test_eq_longer_list ________

-    self =
+    self =

     def test_eq_longer_list(self):
     >       assert [1,2] == [1,2,3]
@@ -224,18 +224,18 @@
     E         Use -v to get the full diff

     failure_demo.py:75: AssertionError
-    _________________ TestSpecialisedExplanations.test_in_list _________________
+    _______ TestSpecialisedExplanations.test_in_list ________

-    self =
+    self =

     def test_in_list(self):
     >       assert 1 in [0, 2, 3, 4, 5]
     E       assert 1 in [0, 2, 3, 4, 5]

     failure_demo.py:78: AssertionError
-    __________ TestSpecialisedExplanations.test_not_in_text_multiline __________
+    _______ TestSpecialisedExplanations.test_not_in_text_multiline ________

-    self =
+    self =

     def test_not_in_text_multiline(self):
         text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@@ -251,9 +251,9 @@
     E         tail

     failure_demo.py:82: AssertionError
-    ___________ TestSpecialisedExplanations.test_not_in_text_single ____________
+    _______ TestSpecialisedExplanations.test_not_in_text_single ________

-    self =
+    self =

     def test_not_in_text_single(self):
         text = 'single foo line'
@@ -264,9 +264,9 @@
     E         ?           +++

     failure_demo.py:86: AssertionError
-    _________ TestSpecialisedExplanations.test_not_in_text_single_long _________
+    _______ TestSpecialisedExplanations.test_not_in_text_single_long ________

-    self =
+    self =

     def test_not_in_text_single_long(self):
         text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@@ -277,9 +277,9 @@
     E         ?           +++

     failure_demo.py:90: AssertionError
-    ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
+    _______ TestSpecialisedExplanations.test_not_in_text_single_long_term ________

-    self =
+    self =

     def test_not_in_text_single_long_term(self):
         text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@@ -290,7 +290,7 @@
     E         ?           ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

     failure_demo.py:94: AssertionError
-    ______________________________ test_attribute ______________________________
+    _______ test_attribute ________

     def test_attribute():
         class Foo(object):
@@ -298,21 +298,21 @@
         i = Foo()
     >       assert i.b == 2
     E       assert 1 == 2
-    E        +  where 1 = .Foo object at 0x7f65f1c814e0>.b
+    E        +  where 1 = .b

     failure_demo.py:101: AssertionError
-    _________________________ test_attribute_instance __________________________
+    _______ test_attribute_instance ________

     def test_attribute_instance():
         class Foo(object):
             b = 1
     >       assert Foo().b == 2
     E       assert 1 == 2
-    E        +  where 1 = .Foo object at 0x7f65f1c7f7f0>.b
-    E        +    where .Foo object at 0x7f65f1c7f7f0> = .Foo'>()
+    E        +  where 1 = .b
+    E        +    where = ()

     failure_demo.py:107: AssertionError
-    __________________________ test_attribute_failure __________________________
+    _______ test_attribute_failure ________

     def test_attribute_failure():
         class Foo(object):
@@ -323,16 +323,16 @@
     >       assert i.b == 2

     failure_demo.py:116:
-    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
+    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

-    self = .Foo object at 0x7f65f1c97dd8>
+    self =

     def _get_b(self):
     >       raise Exception('Failed to get attrib')
     E       Exception: Failed to get attrib

     failure_demo.py:113: Exception
-    _________________________ test_attribute_multiple __________________________
+    _______ test_attribute_multiple ________

     def test_attribute_multiple():
         class Foo(object):
@@ -341,57 +341,57 @@
             b = 2
     >       assert Foo().b == Bar().b
     E       assert 1 == 2
-    E        +  where 1 = .Foo object at 0x7f65f1c9b630>.b
-    E        +    where .Foo object at 0x7f65f1c9b630> = .Foo'>()
-    E        +  and   2 = .Bar object at 0x7f65f1c9b2b0>.b
-    E        +    where .Bar object at 0x7f65f1c9b2b0> = .Bar'>()
+    E        +  where 1 = .b
+    E        +    where = ()
+    E        +  and   2 = .b
+    E        +    where = ()

     failure_demo.py:124: AssertionError
-    __________________________ TestRaises.test_raises __________________________
+    _______ TestRaises.test_raises ________

-    self =
+    self =

     def test_raises(self):
         s = 'qwe'
     >       raises(TypeError, "int(s)")

     failure_demo.py:133:
-    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
+    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

     >       int(s)
     E       ValueError: invalid literal for int() with base 10: 'qwe'

-    <0-codegen /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1075>:1: ValueError
+    <0-codegen $PWD/_pytest/python.py:1091>:1: ValueError
-    ______________________ TestRaises.test_raises_doesnt _______________________
+    _______ TestRaises.test_raises_doesnt ________

-    self =
+    self =

     def test_raises_doesnt(self):
     >       raises(IOError, "int('3')")
     E       Failed: DID NOT RAISE

     failure_demo.py:136: Failed
-    __________________________ TestRaises.test_raise ___________________________
+    _______ TestRaises.test_raise ________

-    self =
+    self =

     def test_raise(self):
     >       raise ValueError("demo error")
     E       ValueError: demo error

     failure_demo.py:139: ValueError
-    ________________________ TestRaises.test_tupleerror ________________________
+    _______ TestRaises.test_tupleerror ________

-    self =
+    self =

     def test_tupleerror(self):
     >       a,b = [1]
     E       ValueError: need more than 1 value to unpack

     failure_demo.py:142: ValueError
-    ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
+    _______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ________

-    self =
+    self =

     def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
         l = [1,2,3]
@@ -400,18 +400,18 @@
     E       TypeError: 'int' object is not iterable

     failure_demo.py:147: TypeError
-    --------------------------- Captured stdout call ---------------------------
+    ----------------------------- Captured stdout call -----------------------------
     l is [1, 2, 3]
-    ________________________ TestRaises.test_some_error ________________________
+    _______ TestRaises.test_some_error ________

-    self =
+    self =

     def test_some_error(self):
     >       if namenotexi:
-    E       NameError: name 'namenotexi' is not defined
+    E       NameError: global name 'namenotexi' is not defined

     failure_demo.py:150: NameError
-    ____________________ test_dynamic_compile_shows_nicely _____________________
+    _______ test_dynamic_compile_shows_nicely ________

     def test_dynamic_compile_shows_nicely():
         src = 'def foo():\n assert 1 == 0\n'
@@ -423,16 +423,16 @@
     >       module.foo()

     failure_demo.py:165:
-    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
+    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

     def foo():
     >       assert 1 == 0
     E       assert 1 == 0

-    <2-codegen 'abc-123' /tmp/sandbox/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
+    <2-codegen 'abc-123' $PWD/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
-    ____________________ TestMoreErrors.test_complex_error _____________________
+    _______ TestMoreErrors.test_complex_error ________

-    self =
+    self =

     def test_complex_error(self):
         def f():
@@ -442,10 +442,10 @@
     >       somefunc(f(), g())

     failure_demo.py:175:
-    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
+    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
     failure_demo.py:8: in somefunc
         otherfunc(x,y)
-    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
+    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

     a = 44, b = 43
@@ -454,9 +454,9 @@
     E       assert 44 == 43

     failure_demo.py:5: AssertionError
-    ___________________ TestMoreErrors.test_z1_unpack_error ____________________
+    _______ TestMoreErrors.test_z1_unpack_error ________

-    self =
+    self =

     def test_z1_unpack_error(self):
         l = []
@@ -464,9 +464,9 @@
     E       ValueError: need more than 0 values to unpack

     failure_demo.py:179: ValueError
-    ____________________ TestMoreErrors.test_z2_type_error _____________________
+    _______ TestMoreErrors.test_z2_type_error ________

-    self =
+    self =

     def test_z2_type_error(self):
         l = 3
@@ -474,21 +474,21 @@
     E       TypeError: 'int' object is not iterable

     failure_demo.py:183: TypeError
-    ______________________ TestMoreErrors.test_startswith ______________________
+    _______ TestMoreErrors.test_startswith ________

-    self =
+    self =

     def test_startswith(self):
         s = "123"
         g = "456"
     >       assert s.startswith(g)
-    E       assert ('456')
-    E        +  where = '123'.startswith
+    E       assert ('456')
+    E        +  where = '123'.startswith

     failure_demo.py:188: AssertionError
-    __________________ TestMoreErrors.test_startswith_nested ___________________
+    _______ TestMoreErrors.test_startswith_nested ________

-    self =
+    self =

     def test_startswith_nested(self):
         def f():
@@ -496,15 +496,15 @@
         def g():
             return "456"
     >       assert f().startswith(g())
-    E       assert ('456')
-    E        +  where = '123'.startswith
-    E        +    where '123' = .f at 0x7f65f1c32950>()
-    E        +  and   '456' = .g at 0x7f65f1c32ea0>()
+    E       assert ('456')
+    E        +  where = '123'.startswith
+    E        +    where '123' = ()
+    E        +  and   '456' = ()

     failure_demo.py:195: AssertionError
-    _____________________ TestMoreErrors.test_global_func ______________________
+    _______ TestMoreErrors.test_global_func ________

-    self =
+    self =

     def test_global_func(self):
     >       assert isinstance(globf(42), float)
@@ -512,20 +512,20 @@
     E        +  where 43 = globf(42)

     failure_demo.py:198: AssertionError
-    _______________________ TestMoreErrors.test_instance _______________________
+    _______ TestMoreErrors.test_instance ________

-    self =
+    self =

     def test_instance(self):
         self.x = 6*7
     >       assert self.x != 42
     E       assert 42 != 42
-    E        +  where 42 = .x
+    E        +  where 42 = .x

     failure_demo.py:202: AssertionError
-    _______________________ TestMoreErrors.test_compare ________________________
+    _______ TestMoreErrors.test_compare ________

-    self =
+    self =

     def test_compare(self):
     >       assert globf(10) < 5
@@ -533,9 +533,9 @@
     E        +  where 11 = globf(10)

     failure_demo.py:205: AssertionError
-    _____________________ TestMoreErrors.test_try_finally ______________________
+    _______ TestMoreErrors.test_try_finally ________

-    self =
+    self =

     def test_try_finally(self):
         x = 1
@@ -544,9 +544,9 @@
     E       assert 1 == 0

     failure_demo.py:210: AssertionError
-    ___________________ TestCustomAssertMsg.test_single_line ___________________
+    _______ TestCustomAssertMsg.test_single_line ________

-    self =
+    self =

     def test_single_line(self):
         class A:
@@ -555,12 +555,12 @@
     >       assert A.a == b, "A.a appears not to be b"
     E       AssertionError: A.a appears not to be b
     E       assert 1 == 2
-    E        +  where 1 = .A'>.a
+    E        +  where 1 = .a

     failure_demo.py:221: AssertionError
-    ____________________ TestCustomAssertMsg.test_multiline ____________________
+    _______ TestCustomAssertMsg.test_multiline ________

-    self =
+    self =

     def test_multiline(self):
         class A:
@@ -572,12 +572,12 @@
     E         or does not appear to be b
     E         one of those
     E       assert 1 == 2
-    E        +  where 1 = .A'>.a
+    E        +  where 1 = .a

     failure_demo.py:227: AssertionError
-    ___________________ TestCustomAssertMsg.test_custom_repr ___________________
+    _______ TestCustomAssertMsg.test_custom_repr ________

-    self =
+    self =

     def test_custom_repr(self):
         class JSON:
@@ -595,4 +595,4 @@
     E        +  where 1 = This is JSON\n{\n  'foo': 'bar'\n}.a

     failure_demo.py:237: AssertionError
-    ======================== 42 failed in 0.35 seconds =========================
+    ======= 42 failed in 0.12 seconds ========

diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/example/simple.txt
--- a/doc/en/example/simple.txt
+++ b/doc/en/example/simple.txt
@@ -39,11 +39,11 @@
     $ py.test -q test_sample.py
     F
-    ================================= FAILURES =================================
-    _______________________________ test_answer ________________________________
-
+    ======= FAILURES ========
+    _______ test_answer ________
+
     cmdopt = 'type1'
-
+
     def test_answer(cmdopt):
         if cmdopt == "type1":
             print ("first")
@@ -51,21 +51,21 @@
             print ("second")
     >       assert 0 # to see what was printed
     E       assert 0
-
+
     test_sample.py:6: AssertionError
-    --------------------------- Captured stdout call ---------------------------
+    ----------------------------- Captured stdout call -----------------------------
     first
-    1 failed in 0.01 seconds
+    1 failed in 0.12 seconds

 And now with supplying a command line option::

     $ py.test -q --cmdopt=type2
     F
-    ================================= FAILURES =================================
-    _______________________________ test_answer ________________________________
-
+    ======= FAILURES ========
+    _______ test_answer ________
+
     cmdopt = 'type2'
-
+
     def test_answer(cmdopt):
         if cmdopt == "type1":
             print ("first")
@@ -73,11 +73,11 @@
             print ("second")
     >       assert 0 # to see what was printed
     E       assert 0
-
+
     test_sample.py:6: AssertionError
-    --------------------------- Captured stdout call ---------------------------
+    ----------------------------- Captured stdout call -----------------------------
     second
-    1 failed in 0.01 seconds
+    1 failed in 0.12 seconds

 You can see that the command line option arrived in our test.  This completes the basic pattern.  However, one often rather wants to process
@@ -107,12 +107,12 @@
 directory with the above conftest.py::

     $ py.test
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-162, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 0 items
-
-    ============================= in 0.00 seconds =============================
+
+    ======= in 0.12 seconds ========

 .. _`excontrolskip`:
@@ -152,28 +152,28 @@
 and when running it will see a skipped "slow" test::

     $ py.test -rs    # "-rs" means report details on the little 's'
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-162, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items
-
+
     test_module.py .s
-    ========================= short test summary info ==========================
-    SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run
-
-    =================== 1 passed, 1 skipped in 0.01 seconds ====================
+    ======= short test summary info ========
+    SKIP [1] $REGENDOC_TMPDIR/conftest.py:9: need --runslow option to run
+
+    ======= 1 passed, 1 skipped in 0.12 seconds ========

 Or run it including the ``slow`` marked test::

     $ py.test --runslow
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-162, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 2 items
-
+
     test_module.py ..
-
-    ========================= 2 passed in 0.01 seconds =========================
+
+    ======= 2 passed in 0.12 seconds ========

 Writing well integrated assertion helpers
 --------------------------------------------------
@@ -203,15 +203,15 @@
     $ py.test -q test_checkconfig.py
     F
-    ================================= FAILURES =================================
-    ______________________________ test_something ______________________________
-
+    ======= FAILURES ========
+    _______ test_something ________
+
     def test_something():
     >       checkconfig(42)
     E       Failed: not configured: 42
-
+
     test_checkconfig.py:8: Failed
-    1 failed in 0.02 seconds
+    1 failed in 0.12 seconds

 Detect if running from within a pytest run
 --------------------------------------------------------------
@@ -258,13 +258,13 @@
 which will add the string to the test header accordingly::

     $ py.test
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-162, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    project deps: mylib-1.1
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 0 items
-
-    ============================= in 0.00 seconds =============================
+
+    ======= in 0.12 seconds ========

 .. regendoc:wipe
@@ -282,24 +282,24 @@
 which will add info only when run with "--v"::

     $ py.test -v
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
-    rootdir: /tmp/doc-exec-162, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7
+    info1: did you know that ...
+    did you?
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collecting ... collected 0 items
-
-    ============================= in 0.00 seconds =============================
+
+    ======= in 0.12 seconds ========

 and nothing when run plainly::

     $ py.test
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-162, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 0 items
-
-    ============================= in 0.00 seconds =============================
+
+    ======= in 0.12 seconds ========

 profiling test duration
 --------------------------
@@ -327,18 +327,18 @@
 Now we can profile which test functions execute the slowest::

     $ py.test --durations=3
-    =========================== test session starts ============================
-    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
-    rootdir: /tmp/doc-exec-162, inifile:
+    ======= test session starts ========
+    platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0
+    rootdir: $REGENDOC_TMPDIR, inifile:
     collected 3 items
-
+
     test_some_are_slow.py ...
- - ========================= slowest 3 test durations ========================= + + ======= slowest 3 test durations ======== 0.20s call test_some_are_slow.py::test_funcslow2 0.10s call test_some_are_slow.py::test_funcslow1 - 0.00s setup test_some_are_slow.py::test_funcslow2 - ========================= 3 passed in 0.31 seconds ========================= + 0.00s setup test_some_are_slow.py::test_funcfast + ======= 3 passed in 0.12 seconds ======== incremental testing - test steps --------------------------------------------------- @@ -389,27 +389,27 @@ If we run this:: $ py.test -rx - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items - + test_step.py .Fx. - - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ - - self = - + + ======= FAILURES ======== + _______ TestUserHandling.test_modification ________ + + self = + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError - ========================= short test summary info ========================== + ======= short test summary info ======== XFAIL test_step.py::TestUserHandling::()::test_deletion reason: previous test failed (test_modification) - ============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds =============== + ======= 1 failed, 2 passed, 1 xfailed in 0.12 seconds ======== We'll see that ``test_deletion`` was not executed because ``test_modification`` failed. It is reported as an "expected failure". 
@@ -460,56 +460,56 @@ We can run this:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 7 items - + test_step.py .Fx. a/test_db.py F a/test_db2.py F b/test_error.py E - - ================================== ERRORS ================================== - _______________________ ERROR at setup of test_root ________________________ - file /tmp/doc-exec-162/b/test_error.py, line 1 + + ======= ERRORS ======== + _______ ERROR at setup of test_root ________ + file $REGENDOC_TMPDIR/b/test_error.py, line 1 def test_root(db): # no db here, will error out fixture 'db' not found - available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd + available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir use 'py.test --fixtures [testpath]' for help on them. 
- - /tmp/doc-exec-162/b/test_error.py:1 - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ - - self = - + + $REGENDOC_TMPDIR/b/test_error.py:1 + ======= FAILURES ======== + _______ TestUserHandling.test_modification ________ + + self = + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError - _________________________________ test_a1 __________________________________ - - db = - + _______ test_a1 ________ + + db = + def test_a1(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: E assert 0 - + a/test_db.py:2: AssertionError - _________________________________ test_a2 __________________________________ - - db = - + _______ test_a2 ________ + + db = + def test_a2(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: E assert 0 - + a/test_db2.py:2: AssertionError - ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ========== + ======= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ======== The two test modules in the ``a`` directory see the same ``db`` fixture instance while the one test in the sister-directory ``b`` doesn't see it. 
We could of course @@ -563,37 +563,36 @@ and run them:: $ py.test test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - + test_module.py FF - - ================================= FAILURES ================================= - ________________________________ test_fail1 ________________________________ - - tmpdir = local('/tmp/pytest-22/test_fail10') - + + ======= FAILURES ======== + _______ test_fail1 ________ + + tmpdir = local('/tmp/pytest-NaN/test_fail10') + def test_fail1(tmpdir): > assert 0 E assert 0 - + test_module.py:2: AssertionError - ________________________________ test_fail2 ________________________________ - + _______ test_fail2 ________ + def test_fail2(): > assert 0 E assert 0 - + test_module.py:4: AssertionError - ========================= 2 failed in 0.02 seconds ========================= + ======= 2 failed in 0.12 seconds ======== you will have a "failures" file which contains the failing test ids:: $ cat failures - test_module.py::test_fail1 (/tmp/pytest-22/test_fail10) - test_module.py::test_fail2 + cat: failures: No such file or directory Making test result information available in fixtures ----------------------------------------------------------- @@ -654,42 +653,42 @@ and run it:: $ py.test -s test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items - - test_module.py Esetting up a test failed! 
test_module.py::test_setup_fails - Fexecuting test failed test_module.py::test_call_fails + + test_module.py E('setting up a test failed!', 'test_module.py::test_setup_fails') + F('executing test failed', 'test_module.py::test_call_fails') F - - ================================== ERRORS ================================== - ____________________ ERROR at setup of test_setup_fails ____________________ - + + ======= ERRORS ======== + _______ ERROR at setup of test_setup_fails ________ + @pytest.fixture def other(): > assert 0 E assert 0 - + test_module.py:6: AssertionError - ================================= FAILURES ================================= - _____________________________ test_call_fails ______________________________ - + ======= FAILURES ======== + _______ test_call_fails ________ + something = None - + def test_call_fails(something): > assert 0 E assert 0 - + test_module.py:12: AssertionError - ________________________________ test_fail2 ________________________________ - + _______ test_fail2 ________ + def test_fail2(): > assert 0 E assert 0 - + test_module.py:15: AssertionError - ==================== 2 failed, 1 error in 0.02 seconds ===================== + ======= 2 failed, 1 warnings, 1 error in 0.12 seconds ======== You'll see that the fixture finalizers could use the precise reporting information. @@ -744,4 +743,4 @@ application, using standard ``py.test`` command-line options:: $ ./app_main --pytest --verbose --tb=long --junit-xml=results.xml test-suite/ - /bin/sh: 1: ./app_main: not found + /bin/sh: ./app_main: No such file or directory diff -r d16d44694393332a23621766f9d6e08f0c48a0b3 -r f4815f2e0d3057ebfdf4fba33a0ee6ad3a8ac6ca doc/en/example/special.txt --- a/doc/en/example/special.txt +++ b/doc/en/example/special.txt @@ -69,4 +69,4 @@ .test other .test_unit1 method called . - 4 passed in 0.03 seconds + 4 passed in 0.12 seconds This diff is so big that we needed to truncate the remainder. 
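The substitutions running through this diff all replace volatile output with stable placeholders: wall-clock timings become `0.12 seconds`, per-run temp directories become `$REGENDOC_TMPDIR`, and the long `===` rules are collapsed, so that regenerated docs diff cleanly between runs. A minimal sketch of that style of normalization in Python (the patterns below are illustrative stand-ins, not regendoc's actual implementation):

```python
import re

# Illustrative normalizers mirroring the substitutions visible in the diff:
# volatile timings and temp paths are rewritten to fixed placeholders.
NORMALIZERS = [
    (re.compile(r"in \d+\.\d+ seconds"), "in 0.12 seconds"),
    (re.compile(r"/tmp/doc-exec-\d+"), "$REGENDOC_TMPDIR"),
]

def normalize(text: str) -> str:
    """Apply each substitution so regenerated example output stays stable."""
    for pattern, replacement in NORMALIZERS:
        text = pattern.sub(replacement, text)
    return text

print(normalize("rootdir: /tmp/doc-exec-162, inifile:"))
# -> rootdir: $REGENDOC_TMPDIR, inifile:
print(normalize("1 failed in 0.01 seconds"))
# -> 1 failed in 0.12 seconds
```

With output normalized this way, only genuine behavior changes show up when the documentation examples are regenerated.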
https://bitbucket.org/pytest-dev/pytest/commits/d6e31c7a2be2/ Changeset: d6e31c7a2be2 User: flub Date: 2015-06-10 09:44:38+00:00 Summary: Merged in RonnyPfannschmidt/pytest/regendoc-upgrade (pull request #303) use regendoc normalization and regenerate docs Affected #: 20 files diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea Makefile --- a/Makefile +++ b/Makefile @@ -1,6 +1,11 @@ # Set of targets useful for development/release process PYTHON = python2.7 PATH := $(PWD)/.env/bin:$(PATH) +REGENDOC_ARGS := \ + --normalize "/={8,} (.*) ={8,}/======= \1 ========/" \ + --normalize "/_{8,} (.*) _{8,}/_______ \1 ________/" \ + --normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \ + --normalize "@/tmp/pytest-\d+/@/tmp/pytest-NaN/@" # prepare virtual python environment .env: @@ -16,10 +21,11 @@ # generate documentation docs: develop - find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc + find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS} cd doc/en; make html # upload documentation upload-docs: develop - find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc --update - cd doc/en; make install + find doc/en -name '*.txt' -not -path 'doc/en/_build/*' | xargs .env/bin/regendoc ${REGENDOC_ARGS} --update + #cd doc/en; make install + diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/assert.txt --- a/doc/en/assert.txt +++ b/doc/en/assert.txt @@ -25,15 +25,15 @@ you will see the return value of the function call:: $ py.test test_assert1.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-87, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items 
test_assert1.py F - ================================= FAILURES ================================= - ______________________________ test_function _______________________________ + ======= FAILURES ======== + _______ test_function ________ def test_function(): > assert f() == 4 @@ -41,7 +41,7 @@ E + where 3 = f() test_assert1.py:5: AssertionError - ========================= 1 failed in 0.01 seconds ========================= + ======= 1 failed in 0.12 seconds ======== ``pytest`` has support for showing the values of the most common subexpressions including calls, attributes, comparisons, and binary and unary @@ -135,15 +135,15 @@ if you run this module:: $ py.test test_assert2.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-87, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_assert2.py F - ================================= FAILURES ================================= - ___________________________ test_set_comparison ____________________________ + ======= FAILURES ======== + _______ test_set_comparison ________ def test_set_comparison(): set1 = set("1308") @@ -157,7 +157,7 @@ E Use -v to get the full diff test_assert2.py:5: AssertionError - ========================= 1 failed in 0.01 seconds ========================= + ======= 1 failed in 0.12 seconds ======== Special comparisons are done for a number of cases: @@ -202,8 +202,8 @@ $ py.test -q test_foocompare.py F - ================================= FAILURES ================================= - _______________________________ test_compare _______________________________ + ======= FAILURES ======== + _______ test_compare ________ def test_compare(): f1 = Foo(1) @@ -213,7 +213,7 @@ E vals: 1 != 2 test_foocompare.py:8: AssertionError - 1 failed in 0.01 seconds + 1 failed 
in 0.12 seconds .. _assert-details: .. _`assert introspection`: diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/builtin.txt --- a/doc/en/builtin.txt +++ b/doc/en/builtin.txt @@ -115,4 +115,4 @@ directory. The returned object is a `py.path.local`_ path object. - in 0.00 seconds + in 0.12 seconds diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/capture.txt --- a/doc/en/capture.txt +++ b/doc/en/capture.txt @@ -63,24 +63,24 @@ of the failing function and hide the other one:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-90, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .F - ================================= FAILURES ================================= - ________________________________ test_func2 ________________________________ + ======= FAILURES ======== + _______ test_func2 ________ def test_func2(): > assert False E assert False test_module.py:9: AssertionError - -------------------------- Captured stdout setup --------------------------- - setting up - ==================== 1 failed, 1 passed in 0.01 seconds ==================== + ---------------------------- Captured stdout setup ----------------------------- + setting up + ======= 1 failed, 1 passed in 0.12 seconds ======== Accessing captured output from a test function --------------------------------------------------- diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/doctest.txt --- a/doc/en/doctest.txt +++ b/doc/en/doctest.txt @@ -43,14 +43,14 @@ then you can just invoke ``py.test`` without command line options:: $ py.test - =========================== test session starts 
============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-96, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 1 items mymodule.py . - ========================= 1 passed in 0.06 seconds ========================= + ======= 1 passed in 0.12 seconds ======== It is possible to use fixtures using the ``getfixture`` helper:: diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/markers.txt --- a/doc/en/example/markers.txt +++ b/doc/en/example/markers.txt @@ -30,30 +30,30 @@ You can then restrict a test run to only run tests marked with ``webtest``:: $ py.test -v -m webtest - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_send_http PASSED - =================== 3 tests deselected by "-m 'webtest'" =================== - ================== 1 passed, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by "-m 'webtest'" ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== Or the inverse, running all tests except the webtest ones:: $ py.test -v -m "not webtest" - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::test_something_quick PASSED test_server.py::test_another PASSED test_server.py::TestClass::test_method PASSED - ================= 1 tests deselected by "-m 'not webtest'" ================= - ================== 3 passed, 1 deselected in 0.01 seconds ================== + ======= 1 tests deselected by "-m 'not webtest'" ======== + ======= 3 passed, 1 deselected in 0.12 seconds ======== Selecting tests based on their node ID -------------------------------------- @@ -63,39 +63,39 @@ tests based on their module, class, method, or function name:: $ py.test -v test_server.py::TestClass::test_method - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 5 items test_server.py::TestClass::test_method PASSED - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== You can also select on the class:: $ py.test -v test_server.py::TestClass - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::TestClass::test_method PASSED - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== Or select multiple nodes:: $ py.test -v test_server.py::TestClass test_server.py::test_send_http - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 8 items test_server.py::TestClass::test_method PASSED test_server.py::test_send_http PASSED - ========================= 2 passed in 0.01 seconds ========================= + ======= 2 passed in 0.12 seconds ======== .. 
_node-id: @@ -124,44 +124,44 @@ select tests based on their names:: $ py.test -v -k http # running with the above defined example module - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::test_send_http PASSED - ====================== 3 tests deselected by '-khttp' ====================== - ================== 1 passed, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by '-khttp' ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== And you can also run all tests except the ones that match the keyword:: $ py.test -k "not send_http" -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 4 items test_server.py::test_something_quick PASSED test_server.py::test_another PASSED test_server.py::TestClass::test_method PASSED - ================= 1 tests deselected by '-knot send_http' ================== - ================== 3 passed, 1 deselected in 0.01 seconds ================== + ======= 1 tests deselected by '-knot send_http' ======== + ======= 3 passed, 1 deselected in 0.12 seconds ======== Or to select "http" and "quick" tests:: $ py.test -k "http or quick" -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 4 items test_server.py::test_send_http PASSED test_server.py::test_something_quick PASSED - ================= 2 tests deselected by '-khttp or quick' ================== - ================== 2 passed, 2 deselected in 0.01 seconds ================== + ======= 2 tests deselected by '-khttp or quick' ======== + ======= 2 passed, 2 deselected in 0.12 seconds ======== .. note:: @@ -201,9 +201,9 @@ @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures - @pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. + @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. - @pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. 
+ @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. For an example on how to add and work with markers from a plugin, see @@ -341,26 +341,26 @@ the test needs:: $ py.test -E stage2 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py s - ======================== 1 skipped in 0.01 seconds ========================= + ======= 1 skipped in 0.12 seconds ======== and here is one that specifies exactly the environment needed:: $ py.test -E stage1 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py . - ========================= 1 passed in 0.01 seconds ========================= + ======= 1 passed in 0.12 seconds ======== The ``--markers`` option always gives you a list of available markers:: @@ -375,9 +375,9 @@ @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures - @pytest.hookimpl(tryfirst=True): mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. + @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. 
- @pytest.hookimpl(trylast=True): mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. + @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. Reading markers which were set from multiple places @@ -420,7 +420,7 @@ glob args=('class',) kwargs={'x': 2} glob args=('module',) kwargs={'x': 1} . - 1 passed in 0.01 seconds + 1 passed in 0.12 seconds marking platform specific tests with pytest -------------------------------------------------------------- @@ -472,29 +472,29 @@ then you will see two test skipped and two executed tests as expected:: $ py.test -rs # this option reports skip reasons - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items - test_plat.py sss. - ========================= short test summary info ========================== - SKIP [3] /tmp/doc-exec-157/conftest.py:12: cannot run on platform linux + test_plat.py s.s. 
+ ======= short test summary info ======== + SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux2 - =================== 1 passed, 3 skipped in 0.01 seconds ==================== + ======= 2 passed, 2 skipped in 0.12 seconds ======== Note that if you specify a platform via the marker-command line option like this:: $ py.test -m linux2 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items - test_plat.py s + test_plat.py . - =================== 3 tests deselected by "-m 'linux2'" ==================== - ================= 1 skipped, 3 deselected in 0.01 seconds ================== + ======= 3 tests deselected by "-m 'linux2'" ======== + ======= 1 passed, 3 deselected in 0.12 seconds ======== then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests. 
@@ -538,47 +538,47 @@ We can now use the ``-m option`` to select one set:: $ py.test -m interface --tb=short - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_module.py FF - ================================= FAILURES ================================= - __________________________ test_interface_simple ___________________________ + ======= FAILURES ======== + _______ test_interface_simple ________ test_module.py:3: in test_interface_simple assert 0 E assert 0 - __________________________ test_interface_complex __________________________ + _______ test_interface_complex ________ test_module.py:6: in test_interface_complex assert 0 E assert 0 - ================== 2 tests deselected by "-m 'interface'" ================== - ================== 2 failed, 2 deselected in 0.02 seconds ================== + ======= 2 tests deselected by "-m 'interface'" ======== + ======= 2 failed, 2 deselected in 0.12 seconds ======== or to select both "event" and "interface" tests:: $ py.test -m "interface or event" --tb=short - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-157, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_module.py FFF - ================================= FAILURES ================================= - __________________________ test_interface_simple ___________________________ + ======= FAILURES ======== + _______ test_interface_simple ________ test_module.py:3: in test_interface_simple assert 0 E assert 0 - 
__________________________ test_interface_complex __________________________ + _______ test_interface_complex ________ test_module.py:6: in test_interface_complex assert 0 E assert 0 - ____________________________ test_event_simple _____________________________ + _______ test_event_simple ________ test_module.py:9: in test_event_simple assert 0 E assert 0 - ============= 1 tests deselected by "-m 'interface or event'" ============== - ================== 3 failed, 1 deselected in 0.02 seconds ================== + ======= 1 tests deselected by "-m 'interface or event'" ======== + ======= 3 failed, 1 deselected in 0.12 seconds ======== diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/nonpython.txt --- a/doc/en/example/nonpython.txt +++ b/doc/en/example/nonpython.txt @@ -26,19 +26,19 @@ now execute the test specification:: nonpython $ py.test test_simple.yml - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $PWD/doc/en, inifile: pytest.ini collected 2 items test_simple.yml .F - ================================= FAILURES ================================= - ______________________________ usecase: hello ______________________________ + ======= FAILURES ======== + _______ usecase: hello ________ usecase execution failed spec failed: 'some': 'other' no further details known at this point. - ==================== 1 failed, 1 passed in 0.19 seconds ==================== + ======= 1 failed, 1 passed in 0.12 seconds ======== You get one dot for the passing ``sub1: sub1`` check and one failure. 
Obviously in the above ``conftest.py`` you'll want to implement a more @@ -56,31 +56,31 @@ consulted when reporting in ``verbose`` mode:: nonpython $ py.test -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 + rootdir: $PWD/doc/en, inifile: pytest.ini collecting ... collected 2 items test_simple.yml::ok PASSED test_simple.yml::hello FAILED - ================================= FAILURES ================================= - ______________________________ usecase: hello ______________________________ + ======= FAILURES ======== + _______ usecase: hello ________ usecase execution failed spec failed: 'some': 'other' no further details known at this point. - ==================== 1 failed, 1 passed in 0.05 seconds ==================== + ======= 1 failed, 1 passed in 0.12 seconds ======== While developing your custom test collection and execution it's also interesting to just look at the collection tree:: nonpython $ py.test --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $PWD/doc/en, inifile: pytest.ini collected 2 items - ============================= in 0.04 seconds ============================= + ======= in 0.12 seconds ======== diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/parametrize.txt --- a/doc/en/example/parametrize.txt +++ b/doc/en/example/parametrize.txt @@ -46,15 +46,15 @@ $ 
py.test -q test_compute.py .. - 2 passed in 0.01 seconds + 2 passed in 0.12 seconds We run only two computations, so we see two dots. let's run the full monty:: $ py.test -q --all ....F - ================================= FAILURES ================================= - _____________________________ test_compute[4] ______________________________ + ======= FAILURES ======== + _______ test_compute[4] ________ param1 = 4 @@ -63,7 +63,7 @@ E assert 4 < 4 test_compute.py:3: AssertionError - 1 failed, 4 passed in 0.02 seconds + 1 failed, 4 passed in 0.12 seconds As expected when running the full range of ``param1`` values we'll get an error on the last one. @@ -126,11 +126,11 @@ $ py.test test_time.py --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: - ============================= in 0.00 seconds ============================= + ======= in 0.12 seconds ======== ERROR: file not found: test_time.py A quick port of "testscenarios" @@ -170,22 +170,22 @@ this is a fully self-contained example which you can run with:: $ py.test test_scenarios.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_scenarios.py .... 
- ========================= 4 passed in 0.02 seconds ========================= + ======= 4 passed in 0.12 seconds ======== If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:: $ py.test --collect-only test_scenarios.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items @@ -195,7 +195,7 @@ - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== Note that we told ``metafunc.parametrize()`` that your scenario values should be considered class-scoped. With pytest-2.3 this leads to a @@ -248,24 +248,24 @@ Let's first see how it looks like at collection time:: $ py.test test_backends.py --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== And then when we run the test:: $ py.test -q test_backends.py .F - ================================= FAILURES ================================= - _________________________ test_db_initialized[d2] __________________________ + ======= FAILURES ======== + _______ test_db_initialized[d2] ________ - db = + db = def test_db_initialized(db): # a dummy test @@ -274,7 +274,7 @@ E Failed: deliberately failing for demo purposes test_backends.py:6: Failed - 1 failed, 1 passed in 0.01 seconds + 1 failed, 1 passed in 0.12 
seconds The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase. @@ -318,17 +318,17 @@ $ py.test -q F.. - ================================= FAILURES ================================= - ________________________ TestClass.test_equals[2-1] ________________________ + ======= FAILURES ======== + _______ TestClass.test_equals[1-2] ________ - self = , a = 1, b = 2 + self = , a = 1, b = 2 def test_equals(self, a, b): > assert a == b E assert 1 == 2 test_parametrize.py:18: AssertionError - 1 failed, 2 passed in 0.02 seconds + 1 failed, 2 passed in 0.12 seconds Indirect parametrization with multiple fixtures -------------------------------------------------------------- @@ -347,8 +347,11 @@ Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize):: . $ py.test -rs -q multipython.py - ........................... 
- 27 passed in 4.14 seconds + ssssssssssss...ssssssssssss + ======= short test summary info ======== + SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python3.3' not found + SKIP [12] $PWD/doc/en/example/multipython.py:22: 'python2.6' not found + 3 passed, 24 skipped in 0.12 seconds Indirect parametrization of optional implementations/imports -------------------------------------------------------------------- @@ -394,16 +397,16 @@ If you run this with reporting for skips enabled:: $ py.test -rs test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-159, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .s - ========================= short test summary info ========================== - SKIP [1] /tmp/doc-exec-159/conftest.py:10: could not import 'opt2' + ======= short test summary info ======== + SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 'opt2' - =================== 1 passed, 1 skipped in 0.01 seconds ==================== + ======= 1 passed, 1 skipped in 0.12 seconds ======== You'll see that we don't have a ``opt2`` module and thus the second test run of our ``test_func1`` was skipped. 
A few notes: diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/pythoncollection.txt --- a/doc/en/example/pythoncollection.txt +++ b/doc/en/example/pythoncollection.txt @@ -42,9 +42,9 @@ then the test collection looks like this:: $ py.test --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-160, inifile: setup.cfg + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: setup.cfg collected 2 items @@ -52,7 +52,7 @@ - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== .. note:: @@ -88,9 +88,9 @@ You can always peek at the collection tree without running tests like this:: . $ py.test --collect-only pythoncollection.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $PWD/doc/en, inifile: pytest.ini collected 3 items @@ -99,7 +99,7 @@ - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== customizing test collection to find all .py files --------------------------------------------------------- @@ -142,12 +142,14 @@ interpreters and will leave out the setup.py file:: $ py.test --collect-only - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-160, inifile: pytest.ini - collected 0 items + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, 
pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini + collected 1 items + + - ============================= in 0.01 seconds ============================= + ======= in 0.12 seconds ======== If you run with a Python3 interpreter the moduled added through the conftest.py file will not be considered for test collection. diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/reportingdemo.txt --- a/doc/en/example/reportingdemo.txt +++ b/doc/en/example/reportingdemo.txt @@ -12,15 +12,15 @@ .. code-block:: python assertion $ py.test failure_demo.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/sandbox/pytest/doc/en, inifile: pytest.ini + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $PWD/doc/en, inifile: pytest.ini collected 42 items failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF - ================================= FAILURES ================================= - ____________________________ test_generative[0] ____________________________ + ======= FAILURES ======== + _______ test_generative[0] ________ param1 = 3, param2 = 6 @@ -29,9 +29,9 @@ E assert (3 * 2) < 6 failure_demo.py:15: AssertionError - _________________________ TestFailing.test_simple __________________________ + _______ TestFailing.test_simple ________ - self = + self = def test_simple(self): def f(): @@ -41,13 +41,13 @@ > assert f() == g() E assert 42 == 43 - E + where 42 = .f at 0x7f65f2315510>() - E + and 43 = .g at 0x7f65f2323510>() + E + where 42 = () + E + and 43 = () failure_demo.py:28: AssertionError - ____________________ TestFailing.test_simple_multiline _____________________ + _______ TestFailing.test_simple_multiline ________ - self = + self = def test_simple_multiline(self): otherfunc_multi( @@ -55,7 +55,7 @@ > 6*9) failure_demo.py:33: 
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a = 42, b = 54 @@ -65,21 +65,21 @@ E assert 42 == 54 failure_demo.py:11: AssertionError - ___________________________ TestFailing.test_not ___________________________ + _______ TestFailing.test_not ________ - self = + self = def test_not(self): def f(): return 42 > assert not f() E assert not 42 - E + where 42 = .f at 0x7f65f2323598>() + E + where 42 = () failure_demo.py:38: AssertionError - _________________ TestSpecialisedExplanations.test_eq_text _________________ + _______ TestSpecialisedExplanations.test_eq_text ________ - self = + self = def test_eq_text(self): > assert 'spam' == 'eggs' @@ -88,9 +88,9 @@ E + eggs failure_demo.py:42: AssertionError - _____________ TestSpecialisedExplanations.test_eq_similar_text _____________ + _______ TestSpecialisedExplanations.test_eq_similar_text ________ - self = + self = def test_eq_similar_text(self): > assert 'foo 1 bar' == 'foo 2 bar' @@ -101,9 +101,9 @@ E ? ^ failure_demo.py:45: AssertionError - ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________ + _______ TestSpecialisedExplanations.test_eq_multiline_text ________ - self = + self = def test_eq_multiline_text(self): > assert 'foo\nspam\nbar' == 'foo\neggs\nbar' @@ -114,9 +114,9 @@ E bar failure_demo.py:48: AssertionError - ______________ TestSpecialisedExplanations.test_eq_long_text _______________ + _______ TestSpecialisedExplanations.test_eq_long_text ________ - self = + self = def test_eq_long_text(self): a = '1'*100 + 'a' + '2'*100 @@ -131,9 +131,9 @@ E ? 
^ failure_demo.py:53: AssertionError - _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________ + _______ TestSpecialisedExplanations.test_eq_long_text_multiline ________ - self = + self = def test_eq_long_text_multiline(self): a = '1\n'*100 + 'a' + '2\n'*100 @@ -155,9 +155,9 @@ E 2 failure_demo.py:58: AssertionError - _________________ TestSpecialisedExplanations.test_eq_list _________________ + _______ TestSpecialisedExplanations.test_eq_list ________ - self = + self = def test_eq_list(self): > assert [0, 1, 2] == [0, 1, 3] @@ -166,9 +166,9 @@ E Use -v to get the full diff failure_demo.py:61: AssertionError - ______________ TestSpecialisedExplanations.test_eq_list_long _______________ + _______ TestSpecialisedExplanations.test_eq_list_long ________ - self = + self = def test_eq_list_long(self): a = [0]*100 + [1] + [3]*100 @@ -179,9 +179,9 @@ E Use -v to get the full diff failure_demo.py:66: AssertionError - _________________ TestSpecialisedExplanations.test_eq_dict _________________ + _______ TestSpecialisedExplanations.test_eq_dict ________ - self = + self = def test_eq_dict(self): > assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0} @@ -196,9 +196,9 @@ E Use -v to get the full diff failure_demo.py:69: AssertionError - _________________ TestSpecialisedExplanations.test_eq_set __________________ + _______ TestSpecialisedExplanations.test_eq_set ________ - self = + self = def test_eq_set(self): > assert set([0, 10, 11, 12]) == set([0, 20, 21]) @@ -213,9 +213,9 @@ E Use -v to get the full diff failure_demo.py:72: AssertionError - _____________ TestSpecialisedExplanations.test_eq_longer_list ______________ + _______ TestSpecialisedExplanations.test_eq_longer_list ________ - self = + self = def test_eq_longer_list(self): > assert [1,2] == [1,2,3] @@ -224,18 +224,18 @@ E Use -v to get the full diff failure_demo.py:75: AssertionError - _________________ TestSpecialisedExplanations.test_in_list _________________ + _______ 
TestSpecialisedExplanations.test_in_list ________ - self = + self = def test_in_list(self): > assert 1 in [0, 2, 3, 4, 5] E assert 1 in [0, 2, 3, 4, 5] failure_demo.py:78: AssertionError - __________ TestSpecialisedExplanations.test_not_in_text_multiline __________ + _______ TestSpecialisedExplanations.test_not_in_text_multiline ________ - self = + self = def test_not_in_text_multiline(self): text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail' @@ -251,9 +251,9 @@ E tail failure_demo.py:82: AssertionError - ___________ TestSpecialisedExplanations.test_not_in_text_single ____________ + _______ TestSpecialisedExplanations.test_not_in_text_single ________ - self = + self = def test_not_in_text_single(self): text = 'single foo line' @@ -264,9 +264,9 @@ E ? +++ failure_demo.py:86: AssertionError - _________ TestSpecialisedExplanations.test_not_in_text_single_long _________ + _______ TestSpecialisedExplanations.test_not_in_text_single_long ________ - self = + self = def test_not_in_text_single_long(self): text = 'head ' * 50 + 'foo ' + 'tail ' * 20 @@ -277,9 +277,9 @@ E ? +++ failure_demo.py:90: AssertionError - ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______ + _______ TestSpecialisedExplanations.test_not_in_text_single_long_term ________ - self = + self = def test_not_in_text_single_long_term(self): text = 'head ' * 50 + 'f'*70 + 'tail ' * 20 @@ -290,7 +290,7 @@ E ? 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ failure_demo.py:94: AssertionError - ______________________________ test_attribute ______________________________ + _______ test_attribute ________ def test_attribute(): class Foo(object): @@ -298,21 +298,21 @@ i = Foo() > assert i.b == 2 E assert 1 == 2 - E + where 1 = .Foo object at 0x7f65f1c814e0>.b + E + where 1 = .b failure_demo.py:101: AssertionError - _________________________ test_attribute_instance __________________________ + _______ test_attribute_instance ________ def test_attribute_instance(): class Foo(object): b = 1 > assert Foo().b == 2 E assert 1 == 2 - E + where 1 = .Foo object at 0x7f65f1c7f7f0>.b - E + where .Foo object at 0x7f65f1c7f7f0> = .Foo'>() + E + where 1 = .b + E + where = () failure_demo.py:107: AssertionError - __________________________ test_attribute_failure __________________________ + _______ test_attribute_failure ________ def test_attribute_failure(): class Foo(object): @@ -323,16 +323,16 @@ > assert i.b == 2 failure_demo.py:116: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ - self = .Foo object at 0x7f65f1c97dd8> + self = def _get_b(self): > raise Exception('Failed to get attrib') E Exception: Failed to get attrib failure_demo.py:113: Exception - _________________________ test_attribute_multiple __________________________ + _______ test_attribute_multiple ________ def test_attribute_multiple(): class Foo(object): @@ -341,57 +341,57 @@ b = 2 > assert Foo().b == Bar().b E assert 1 == 2 - E + where 1 = .Foo object at 0x7f65f1c9b630>.b - E + where .Foo object at 0x7f65f1c9b630> = .Foo'>() - E + and 2 = .Bar object at 0x7f65f1c9b2b0>.b - E + where .Bar object at 0x7f65f1c9b2b0> = .Bar'>() + E + where 1 = .b + E + where = () + E + and 2 = .b + E + where = () failure_demo.py:124: AssertionError - __________________________ TestRaises.test_raises 
__________________________ + _______ TestRaises.test_raises ________ - self = + self = def test_raises(self): s = 'qwe' > raises(TypeError, "int(s)") failure_demo.py:133: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > int(s) E ValueError: invalid literal for int() with base 10: 'qwe' - <0-codegen /tmp/sandbox/pytest/.tox/regen/lib/python3.4/site-packages/_pytest/python.py:1075>:1: ValueError - ______________________ TestRaises.test_raises_doesnt _______________________ + <0-codegen $PWD/_pytest/python.py:1091>:1: ValueError + _______ TestRaises.test_raises_doesnt ________ - self = + self = def test_raises_doesnt(self): > raises(IOError, "int('3')") E Failed: DID NOT RAISE failure_demo.py:136: Failed - __________________________ TestRaises.test_raise ___________________________ + _______ TestRaises.test_raise ________ - self = + self = def test_raise(self): > raise ValueError("demo error") E ValueError: demo error failure_demo.py:139: ValueError - ________________________ TestRaises.test_tupleerror ________________________ + _______ TestRaises.test_tupleerror ________ - self = + self = def test_tupleerror(self): > a,b = [1] E ValueError: need more than 1 value to unpack failure_demo.py:142: ValueError - ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______ + _______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ________ - self = + self = def test_reinterpret_fails_with_print_for_the_fun_of_it(self): l = [1,2,3] @@ -400,18 +400,18 @@ E TypeError: 'int' object is not iterable failure_demo.py:147: TypeError - --------------------------- Captured stdout call --------------------------- + ----------------------------- Captured stdout call ----------------------------- l is [1, 2, 3] - ________________________ TestRaises.test_some_error ________________________ + _______ TestRaises.test_some_error ________ - self = + 
self = def test_some_error(self): > if namenotexi: - E NameError: name 'namenotexi' is not defined + E NameError: global name 'namenotexi' is not defined failure_demo.py:150: NameError - ____________________ test_dynamic_compile_shows_nicely _____________________ + _______ test_dynamic_compile_shows_nicely ________ def test_dynamic_compile_shows_nicely(): src = 'def foo():\n assert 1 == 0\n' @@ -423,16 +423,16 @@ > module.foo() failure_demo.py:165: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def foo(): > assert 1 == 0 E assert 1 == 0 - <2-codegen 'abc-123' /tmp/sandbox/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError - ____________________ TestMoreErrors.test_complex_error _____________________ + <2-codegen 'abc-123' $PWD/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError + _______ TestMoreErrors.test_complex_error ________ - self = + self = def test_complex_error(self): def f(): @@ -442,10 +442,10 @@ > somefunc(f(), g()) failure_demo.py:175: - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ failure_demo.py:8: in somefunc otherfunc(x,y) - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a = 44, b = 43 @@ -454,9 +454,9 @@ E assert 44 == 43 failure_demo.py:5: AssertionError - ___________________ TestMoreErrors.test_z1_unpack_error ____________________ + _______ TestMoreErrors.test_z1_unpack_error ________ - self = + self = def test_z1_unpack_error(self): l = [] @@ -464,9 +464,9 @@ E ValueError: need more than 0 values to unpack failure_demo.py:179: ValueError - ____________________ TestMoreErrors.test_z2_type_error _____________________ + _______ TestMoreErrors.test_z2_type_error ________ - self = + self 
= def test_z2_type_error(self): l = 3 @@ -474,21 +474,21 @@ E TypeError: 'int' object is not iterable failure_demo.py:183: TypeError - ______________________ TestMoreErrors.test_startswith ______________________ + _______ TestMoreErrors.test_startswith ________ - self = + self = def test_startswith(self): s = "123" g = "456" > assert s.startswith(g) - E assert ('456') - E + where = '123'.startswith + E assert ('456') + E + where = '123'.startswith failure_demo.py:188: AssertionError - __________________ TestMoreErrors.test_startswith_nested ___________________ + _______ TestMoreErrors.test_startswith_nested ________ - self = + self = def test_startswith_nested(self): def f(): @@ -496,15 +496,15 @@ def g(): return "456" > assert f().startswith(g()) - E assert ('456') - E + where = '123'.startswith - E + where '123' = .f at 0x7f65f1c32950>() - E + and '456' = .g at 0x7f65f1c32ea0>() + E assert ('456') + E + where = '123'.startswith + E + where '123' = () + E + and '456' = () failure_demo.py:195: AssertionError - _____________________ TestMoreErrors.test_global_func ______________________ + _______ TestMoreErrors.test_global_func ________ - self = + self = def test_global_func(self): > assert isinstance(globf(42), float) @@ -512,20 +512,20 @@ E + where 43 = globf(42) failure_demo.py:198: AssertionError - _______________________ TestMoreErrors.test_instance _______________________ + _______ TestMoreErrors.test_instance ________ - self = + self = def test_instance(self): self.x = 6*7 > assert self.x != 42 E assert 42 != 42 - E + where 42 = .x + E + where 42 = .x failure_demo.py:202: AssertionError - _______________________ TestMoreErrors.test_compare ________________________ + _______ TestMoreErrors.test_compare ________ - self = + self = def test_compare(self): > assert globf(10) < 5 @@ -533,9 +533,9 @@ E + where 11 = globf(10) failure_demo.py:205: AssertionError - _____________________ TestMoreErrors.test_try_finally ______________________ + _______ 
TestMoreErrors.test_try_finally ________ - self = + self = def test_try_finally(self): x = 1 @@ -544,9 +544,9 @@ E assert 1 == 0 failure_demo.py:210: AssertionError - ___________________ TestCustomAssertMsg.test_single_line ___________________ + _______ TestCustomAssertMsg.test_single_line ________ - self = + self = def test_single_line(self): class A: @@ -555,12 +555,12 @@ > assert A.a == b, "A.a appears not to be b" E AssertionError: A.a appears not to be b E assert 1 == 2 - E + where 1 = .A'>.a + E + where 1 = .a failure_demo.py:221: AssertionError - ____________________ TestCustomAssertMsg.test_multiline ____________________ + _______ TestCustomAssertMsg.test_multiline ________ - self = + self = def test_multiline(self): class A: @@ -572,12 +572,12 @@ E or does not appear to be b E one of those E assert 1 == 2 - E + where 1 = .A'>.a + E + where 1 = .a failure_demo.py:227: AssertionError - ___________________ TestCustomAssertMsg.test_custom_repr ___________________ + _______ TestCustomAssertMsg.test_custom_repr ________ - self = + self = def test_custom_repr(self): class JSON: @@ -595,4 +595,4 @@ E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a failure_demo.py:237: AssertionError - ======================== 42 failed in 0.35 seconds ========================= + ======= 42 failed in 0.12 seconds ======== diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/simple.txt --- a/doc/en/example/simple.txt +++ b/doc/en/example/simple.txt @@ -39,11 +39,11 @@ $ py.test -q test_sample.py F - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ - + ======= FAILURES ======== + _______ test_answer ________ + cmdopt = 'type1' - + def test_answer(cmdopt): if cmdopt == "type1": print ("first") @@ -51,21 +51,21 @@ print ("second") > assert 0 # to see what was printed E assert 0 - + test_sample.py:6: AssertionError - 
--------------------------- Captured stdout call --------------------------- + ----------------------------- Captured stdout call ----------------------------- first - 1 failed in 0.01 seconds + 1 failed in 0.12 seconds And now with supplying a command line option:: $ py.test -q --cmdopt=type2 F - ================================= FAILURES ================================= - _______________________________ test_answer ________________________________ - + ======= FAILURES ======== + _______ test_answer ________ + cmdopt = 'type2' - + def test_answer(cmdopt): if cmdopt == "type1": print ("first") @@ -73,11 +73,11 @@ print ("second") > assert 0 # to see what was printed E assert 0 - + test_sample.py:6: AssertionError - --------------------------- Captured stdout call --------------------------- + ----------------------------- Captured stdout call ----------------------------- second - 1 failed in 0.01 seconds + 1 failed in 0.12 seconds You can see that the command line option arrived in our test. This completes the basic pattern. However, one often rather wants to process @@ -107,12 +107,12 @@ directory with the above conftest.py:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== .. 
_`excontrolskip`: @@ -152,28 +152,28 @@ and when running it will see a skipped "slow" test:: $ py.test -rs # "-rs" means report details on the little 's' - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - + test_module.py .s - ========================= short test summary info ========================== - SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run - - =================== 1 passed, 1 skipped in 0.01 seconds ==================== + ======= short test summary info ======== + SKIP [1] $REGENDOC_TMPDIR/conftest.py:9: need --runslow option to run + + ======= 1 passed, 1 skipped in 0.12 seconds ======== Or run it including the ``slow`` marked test:: $ py.test --runslow - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - + test_module.py .. 
- - ========================= 2 passed in 0.01 seconds ========================= + + ======= 2 passed in 0.12 seconds ======== Writing well integrated assertion helpers -------------------------------------------------- @@ -203,15 +203,15 @@ $ py.test -q test_checkconfig.py F - ================================= FAILURES ================================= - ______________________________ test_something ______________________________ - + ======= FAILURES ======== + _______ test_something ________ + def test_something(): > checkconfig(42) E Failed: not configured: 42 - + test_checkconfig.py:8: Failed - 1 failed in 0.02 seconds + 1 failed in 0.12 seconds Detect if running from within a pytest run -------------------------------------------------------------- @@ -258,13 +258,13 @@ which will add the string to the test header accordingly:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 project deps: mylib-1.1 + rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== .. regendoc:wipe @@ -282,24 +282,24 @@ which will add info only when run with "--v":: $ py.test -v - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 -- $PWD/.env/bin/python2.7 info1: did you know that ... did you? + rootdir: $REGENDOC_TMPDIR, inifile: collecting ... 
collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== and nothing when run plainly:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items - - ============================= in 0.00 seconds ============================= + + ======= in 0.12 seconds ======== profiling test duration -------------------------- @@ -327,18 +327,18 @@ Now we can profile which test functions execute the slowest:: $ py.test --durations=3 - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items - + test_some_are_slow.py ... 
- - ========================= slowest 3 test durations ========================= + + ======= slowest 3 test durations ======== 0.20s call test_some_are_slow.py::test_funcslow2 0.10s call test_some_are_slow.py::test_funcslow1 - 0.00s setup test_some_are_slow.py::test_funcslow2 - ========================= 3 passed in 0.31 seconds ========================= + 0.00s setup test_some_are_slow.py::test_funcfast + ======= 3 passed in 0.12 seconds ======== incremental testing - test steps --------------------------------------------------- @@ -389,27 +389,27 @@ If we run this:: $ py.test -rx - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items - + test_step.py .Fx. - - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ - - self = - + + ======= FAILURES ======== + _______ TestUserHandling.test_modification ________ + + self = + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError - ========================= short test summary info ========================== + ======= short test summary info ======== XFAIL test_step.py::TestUserHandling::()::test_deletion reason: previous test failed (test_modification) - ============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds =============== + ======= 1 failed, 2 passed, 1 xfailed in 0.12 seconds ======== We'll see that ``test_deletion`` was not executed because ``test_modification`` failed. It is reported as an "expected failure". 
@@ -460,56 +460,56 @@ We can run this:: $ py.test - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 7 items - + test_step.py .Fx. a/test_db.py F a/test_db2.py F b/test_error.py E - - ================================== ERRORS ================================== - _______________________ ERROR at setup of test_root ________________________ - file /tmp/doc-exec-162/b/test_error.py, line 1 + + ======= ERRORS ======== + _______ ERROR at setup of test_root ________ + file $REGENDOC_TMPDIR/b/test_error.py, line 1 def test_root(db): # no db here, will error out fixture 'db' not found - available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd + available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir use 'py.test --fixtures [testpath]' for help on them. 
- - /tmp/doc-exec-162/b/test_error.py:1 - ================================= FAILURES ================================= - ____________________ TestUserHandling.test_modification ____________________ - - self = - + + $REGENDOC_TMPDIR/b/test_error.py:1 + ======= FAILURES ======== + _______ TestUserHandling.test_modification ________ + + self = + def test_modification(self): > assert 0 E assert 0 - + test_step.py:9: AssertionError - _________________________________ test_a1 __________________________________ - - db = - + _______ test_a1 ________ + + db = + def test_a1(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: E assert 0 - + a/test_db.py:2: AssertionError - _________________________________ test_a2 __________________________________ - - db = - + _______ test_a2 ________ + + db = + def test_a2(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: E assert 0 - + a/test_db2.py:2: AssertionError - ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ========== + ======= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ======== The two test modules in the ``a`` directory see the same ``db`` fixture instance while the one test in the sister-directory ``b`` doesn't see it. 
We could of course @@ -563,37 +563,36 @@ and run them:: $ py.test test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items - + test_module.py FF - - ================================= FAILURES ================================= - ________________________________ test_fail1 ________________________________ - - tmpdir = local('/tmp/pytest-22/test_fail10') - + + ======= FAILURES ======== + _______ test_fail1 ________ + + tmpdir = local('/tmp/pytest-NaN/test_fail10') + def test_fail1(tmpdir): > assert 0 E assert 0 - + test_module.py:2: AssertionError - ________________________________ test_fail2 ________________________________ - + _______ test_fail2 ________ + def test_fail2(): > assert 0 E assert 0 - + test_module.py:4: AssertionError - ========================= 2 failed in 0.02 seconds ========================= + ======= 2 failed in 0.12 seconds ======== you will have a "failures" file which contains the failing test ids:: $ cat failures - test_module.py::test_fail1 (/tmp/pytest-22/test_fail10) - test_module.py::test_fail2 + cat: failures: No such file or directory Making test result information available in fixtures ----------------------------------------------------------- @@ -654,42 +653,42 @@ and run it:: $ py.test -s test_module.py - =========================== test session starts ============================ - platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 - rootdir: /tmp/doc-exec-162, inifile: + ======= test session starts ======== + platform linux2 -- Python 2.7.9, pytest-2.8.0.dev4, py-1.4.28, pluggy-0.3.0 + rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items - - test_module.py Esetting up a test failed! 
test_module.py::test_setup_fails - Fexecuting test failed test_module.py::test_call_fails + + test_module.py E('setting up a test failed!', 'test_module.py::test_setup_fails') + F('executing test failed', 'test_module.py::test_call_fails') F - - ================================== ERRORS ================================== - ____________________ ERROR at setup of test_setup_fails ____________________ - + + ======= ERRORS ======== + _______ ERROR at setup of test_setup_fails ________ + @pytest.fixture def other(): > assert 0 E assert 0 - + test_module.py:6: AssertionError - ================================= FAILURES ================================= - _____________________________ test_call_fails ______________________________ - + ======= FAILURES ======== + _______ test_call_fails ________ + something = None - + def test_call_fails(something): > assert 0 E assert 0 - + test_module.py:12: AssertionError - ________________________________ test_fail2 ________________________________ - + _______ test_fail2 ________ + def test_fail2(): > assert 0 E assert 0 - + test_module.py:15: AssertionError - ==================== 2 failed, 1 error in 0.02 seconds ===================== + ======= 2 failed, 1 warnings, 1 error in 0.12 seconds ======== You'll see that the fixture finalizers could use the precise reporting information. @@ -744,4 +743,4 @@ application, using standard ``py.test`` command-line options:: $ ./app_main --pytest --verbose --tb=long --junit-xml=results.xml test-suite/ - /bin/sh: 1: ./app_main: not found + /bin/sh: ./app_main: No such file or directory diff -r 614bb548b53695323a83069fe6b4584f4a02f65d -r d6e31c7a2be21b51b813c024d5444d5a96e496ea doc/en/example/special.txt --- a/doc/en/example/special.txt +++ b/doc/en/example/special.txt @@ -69,4 +69,4 @@ .test other .test_unit1 method called . - 4 passed in 0.03 seconds + 4 passed in 0.12 seconds This diff is so big that we needed to truncate the remainder. 
Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From issues-reply at bitbucket.org Sun Jun 14 00:28:06 2015 From: issues-reply at bitbucket.org (Jason R. Coombs) Date: Sat, 13 Jun 2015 22:28:06 -0000 Subject: [Pytest-commit] Issue #768: doctests appear to run before session scope fixtures (pytest-dev/pytest) Message-ID: <20150613222806.6977.85775@app10.ash-private.bitbucket.org> New issue 768: doctests appear to run before session scope fixtures https://bitbucket.org/pytest-dev/pytest/issue/768/doctests-appear-to-run-before-session Jason R. Coombs: I've created a session scoped fixture in ./conftest.py: ``` @pytest.fixture(autouse=True, scope='session') def myfixture(): raise ValueError() ``` Then, I create a module to be doctested: ``` def foo(): """ >>> assert True """ ``` Then, I run pytest with `--doctest-modules`. It completes one test on "mod.py" and it passes. Furthermore, if anything in the session scope is required by the doctest, it won't yet be set up. It seems that doctests are run outside of the test session. I understand that session scope fixtures [are meant to serve as test setup and teardown](http://stackoverflow.com/a/12600154/70170). However, they seem not to be available for doctests. Is there any way to set up the context for doctests? I'd like to incorporate another fixture from a plugin during setup as well.
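[Editor's note] The behavior described in issue #768 is consistent with how stdlib ``doctest`` works: a doctest executes against a plain globals namespace and has no hook into pytest's fixture machinery. Below is a minimal stdlib-only sketch of that collect-and-run cycle (no pytest involved; ``run_doctests`` and ``CONFIG`` are illustrative names, not pytest or doctest API):

```python
import doctest

def foo():
    """A doctest sees only the globals it is run with.

    >>> CONFIG["ready"]
    True
    """

# Module-level state the doctest can read; a pytest session fixture
# (like the reporter's `myfixture`) has no equivalent entry point here.
CONFIG = {"ready": True}

def run_doctests():
    """Collect and run foo's examples; return (failed, attempted)."""
    finder = doctest.DocTestFinder()
    runner = doctest.DocTestRunner(verbose=False)
    for test in finder.find(foo, name="foo", globs={"CONFIG": CONFIG}):
        runner.run(test)
    return runner.failures, runner.tries
```

If pytest's ``--doctest-modules`` collection wraps this same machinery, that would explain the observation above: nothing in the collect-and-run cycle consults session-scoped setup.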
From commits-noreply at bitbucket.org Mon Jun 15 14:36:30 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Mon, 15 Jun 2015 12:36:30 -0000 Subject: [Pytest-commit] commit/tox: hpk42: Merged in novel/tox/typofix (pull request #164) Message-ID: <20150615123630.31599.4469@app11.ash-private.bitbucket.org> 1 new commit in tox: https://bitbucket.org/hpk42/tox/commits/6d4378ce9062/ Changeset: 6d4378ce9062 User: hpk42 Date: 2015-06-15 12:36:26+00:00 Summary: Merged in novel/tox/typofix (pull request #164) typo fix Affected #: 1 file diff -r 93f262ca702e8d2e1192875ded9eaa584de3aa00 -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -909,7 +909,7 @@ if envkey not in os.environ and default is None: raise tox.exception.ConfigError( - "substitution env:%r: unkown environment variable %r" % + "substitution env:%r: unknown environment variable %r" % (envkey, envkey)) return os.environ.get(envkey, default) Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
From commits-noreply at bitbucket.org Mon Jun 15 14:36:30 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Mon, 15 Jun 2015 12:36:30 -0000 Subject: [Pytest-commit] commit/tox: 3 new changesets Message-ID: <20150615123630.12183.52187@app02.ash-private.bitbucket.org> 3 new commits in tox: https://bitbucket.org/hpk42/tox/commits/637beec44454/ Changeset: 637beec44454 Branch: typofix User: novel Date: 2015-06-15 10:56:00+00:00 Summary: Created new branch typofix Affected #: 0 files https://bitbucket.org/hpk42/tox/commits/a53ade9660a5/ Changeset: a53ade9660a5 Branch: typofix User: novel Date: 2015-06-15 10:07:26+00:00 Summary: Fix typo in exception string s/unkown/unknown/ Affected #: 1 file diff -r 637beec444544326abad093437a37433377a317a -r a53ade9660a5595bd06444e5b80b9e92251d3779 tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -909,7 +909,7 @@ if envkey not in os.environ and default is None: raise tox.exception.ConfigError( - "substitution env:%r: unkown environment variable %r" % + "substitution env:%r: unknown environment variable %r" % (envkey, envkey)) return os.environ.get(envkey, default) https://bitbucket.org/hpk42/tox/commits/6d4378ce9062/ Changeset: 6d4378ce9062 User: hpk42 Date: 2015-06-15 12:36:26+00:00 Summary: Merged in novel/tox/typofix (pull request #164) typo fix Affected #: 1 file diff -r 93f262ca702e8d2e1192875ded9eaa584de3aa00 -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -909,7 +909,7 @@ if envkey not in os.environ and default is None: raise tox.exception.ConfigError( - "substitution env:%r: unkown environment variable %r" % + "substitution env:%r: unknown environment variable %r" % (envkey, envkey)) return os.environ.get(envkey, default) Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
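[Editor's note] The tox/config.py hunk repeated in both notifications above is small enough to restate as a runnable sketch, assuming only what the diff shows. ``getenv_substitution`` is a made-up standalone name; in tox the logic lives inside the ini-file substitution code, and the real exception is ``tox.exception.ConfigError``:

```python
import os

class ConfigError(Exception):
    """Stand-in for tox.exception.ConfigError."""

def getenv_substitution(envkey, default=None):
    # Mirrors the hunk above: an {env:NAME} substitution with no
    # default and no matching environment variable is a config error.
    if envkey not in os.environ and default is None:
        raise ConfigError(
            "substitution env:%r: unknown environment variable %r"
            % (envkey, envkey))
    return os.environ.get(envkey, default)
```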
From commits-noreply at bitbucket.org Mon Jun 15 23:04:30 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Mon, 15 Jun 2015 21:04:30 -0000 Subject: [Pytest-commit] commit/pytest: nicoddemus: Update README: moved to GitHub Message-ID: <20150615210430.14818.9494@app08.ash-private.bitbucket.org> 1 new commit in pytest: https://bitbucket.org/pytest-dev/pytest/commits/57ed8380ee5e/ Changeset: 57ed8380ee5e User: nicoddemus Date: 2015-06-15 21:04:02+00:00 Summary: Update README: moved to GitHub Affected #: 1 file diff -r d6e31c7a2be21b51b813c024d5444d5a96e496ea -r 57ed8380ee5ea5c624d144883d3691043013dd22 README.rst --- a/README.rst +++ b/README.rst @@ -1,53 +1,4 @@ -.. image:: https://pypip.in/v/pytest/badge.png - :target: https://pypi.python.org/pypi/pytest +The official pytest repository has been moved to GitHub: -Documentation: http://pytest.org/latest/ +https://github.com/pytest-dev/pytest -Changelog: http://pytest.org/latest/changelog.html - -Issues: https://bitbucket.org/pytest-dev/pytest/issues?status=open - -CI: https://drone.io/bitbucket.org/pytest-dev/pytest - -The ``pytest`` testing tool makes it easy to write small tests, yet -scales to support complex functional testing. It provides - -- `auto-discovery - `_ - of test modules and functions, -- detailed info on failing `assert statements `_ (no need to remember ``self.assert*`` names) -- `modular fixtures `_ for - managing small or parametrized long-lived test resources. -- multi-paradigm support: you can use ``pytest`` to run test suites based - on `unittest `_ (or trial), - `nose `_ -- single-source compatibility from Python2.6 all the way up to - Python3.4, PyPy-2.3, (jython-2.5 untested) - - -- many `external plugins `_. - -A simple example for a test:: - - # content of test_module.py - def test_function(): - i = 4 - assert i == 3 - -which can be run with ``py.test test_module.py``. See `getting-started `_ for more examples. 
- -For much more info, including PDF docs, see - - http://pytest.org - -and report bugs at: - - http://bitbucket.org/pytest-dev/pytest/issues/ - -and checkout or fork repo at: - - http://bitbucket.org/pytest-dev/pytest/ - - -Copyright Holger Krekel and others, 2004-2014 -Licensed under the MIT license. Repository URL: https://bitbucket.org/pytest-dev/pytest/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From issues-reply at bitbucket.org Tue Jun 16 01:21:34 2015 From: issues-reply at bitbucket.org (Bruno Oliveira) Date: Mon, 15 Jun 2015 23:21:34 -0000 Subject: [Pytest-commit] Issue #769: IMPORTANT: pytest development has moved to GitHub (pytest-dev/pytest) Message-ID: <20150615232134.17158.2153@app03.ash-private.bitbucket.org> New issue 769: IMPORTANT: pytest development has moved to GitHub https://bitbucket.org/pytest-dev/pytest/issue/769/important-pytest-development-has-moved-to Bruno Oliveira: Please post new issues at https://github.com/pytest-dev/pytest/issues From issues-reply at bitbucket.org Tue Jun 16 17:33:32 2015 From: issues-reply at bitbucket.org (stefano-m) Date: Tue, 16 Jun 2015 15:33:32 -0000 Subject: [Pytest-commit] Issue #257: Shell commands fail on Windows due to popen shell=False in Action._popen (hpk42/tox) Message-ID: <20150616153332.30002.50650@app02.ash-private.bitbucket.org> New issue 257: Shell commands fail on Windows due to popen shell=False in Action._popen https://bitbucket.org/hpk42/tox/issue/257/shell-commands-fail-on-windows-due-to stefano-m: Hello, I am using tox 2.0.2 and Python 2.7 on Windows 7. As part of my tox tests, I need to run gulp and other npm-related commands. To do so, I first run npm install and then e.g. run gulp that has been installed in the node_modules directory.
The relevant part of my tox.ini looks like [testenv] whitelist_externals = npm commands = npm install {toxinidir}/node_modules/.bin/gulp When I run tox, I get the following error when gulp is run: py27-win32 runtests: commands[0] | npm install py27-win32 runtests: commands[1] | F:\my_project\my_project/node_modules/.bin/gulp ERROR: invocation failed (errno 8), args: ['F:\\my_project\\my_project/node_modules/.bin/gulp'], cwd: F:\my_project\my_project Traceback (most recent call last): File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "C:\Python27\lib\runpy.py", line 72, in _run_code exec code in run_globals File "C:\Python27\Scripts\tox.exe\__main__.py", line 9, in File "C:\Python27\lib\site-packages\tox\session.py", line 39, in main retcode = Session(config).runcommand() File "C:\Python27\lib\site-packages\tox\session.py", line 367, in runcommand return self.subcommand_test() File "C:\Python27\lib\site-packages\tox\session.py", line 540, in subcommand_test self.runtestenv(venv) File "C:\Python27\lib\site-packages\tox\session.py", line 548, in runtestenv venv.test(redirect=redirect) File "C:\Python27\lib\site-packages\tox\venv.py", line 360, in test ignore_ret=ignore_ret) File "C:\Python27\lib\site-packages\tox\venv.py", line 384, in _pcall return action.popen(args, cwd=cwd, env=env, redirect=redirect, ignore_ret=ignore_ret) File "C:\Python27\lib\site-packages\tox\session.py", line 130, in popen stdout=stdout, stderr=STDOUT) File "C:\Python27\lib\site-packages\tox\session.py", line 218, in _popen stdout=stdout, stderr=stderr, env=env) File "C:\Python27\lib\subprocess.py", line 710, in __init__ errread, errwrite) File "C:\Python27\lib\subprocess.py", line 958, in _execute_child startupinfo) WindowsError: [Error 193] %1 is not a valid Win32 application The same configuration works OK on Linux. 
I think that the problem lies in `Action._popen` in the `session.py` module, where `self.session.popen` is called (line 216). popen is run with the `shell` keyword argument set to `False`, which causes the command to fall over on Windows. Since I had already noticed that problem (on Windows only) in other places where I use `subprocess`, I tried to change the value to `True` by doing return self.session.popen(args, shell=(sys.platform == 'win32'), cwd=str(cwd), universal_newlines=True, stdout=stdout, stderr=stderr, env=env) and that solved the issue (while keeping the default value of `False` on non-Windows platforms). From issues-reply at bitbucket.org Wed Jun 17 16:33:53 2015 From: issues-reply at bitbucket.org (Federico Ressi) Date: Wed, 17 Jun 2015 14:33:53 -0000 Subject: [Pytest-commit] Issue #258: Pass through *_proxy environment variables by default (hpk42/tox) Message-ID: <20150617143353.28094.57731@app11.ash-private.bitbucket.org> New issue 258: Pass through *_proxy environment variables by default https://bitbucket.org/hpk42/tox/issue/258/pass-through-_proxy-environment-variables Federico Ressi: Inside corporations it is quite common to be working behind a proxy. On Linux some applications (like pip) take their proxy configuration from the following environment variables: http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY Being system variables, quite standard, and not specific to any project, I think they should be passed through by default.
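[Editor's note] Returning to issue #257 above: the reporter's workaround toggles the ``shell`` argument on the platform. A self-contained sketch of that pattern follows; it reflects the reporter's proposed change, not tox's shipped behavior, and ``run_external`` is an illustrative name:

```python
import subprocess
import sys

def run_external(cmd_list):
    # On Windows, npm installs .cmd shims that CreateProcess cannot
    # execute directly (hence WindowsError 193, "%1 is not a valid
    # Win32 application"); routing through the shell lets cmd.exe
    # resolve them. Elsewhere, keep the safer shell=False default.
    use_shell = sys.platform == "win32"
    if use_shell:
        # with shell=True on Windows, a single command string is used
        cmd = subprocess.list2cmdline(cmd_list)
    else:
        cmd = cmd_list
    return subprocess.call(cmd, shell=use_shell)
```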
From issues-reply at bitbucket.org Wed Jun 17 16:42:48 2015 From: issues-reply at bitbucket.org (Federico Ressi) Date: Wed, 17 Jun 2015 14:42:48 -0000 Subject: [Pytest-commit] Issue #259: passenv statement should accept multi-line list (hpk42/tox) Message-ID: <20150617144248.6960.13497@app09.ash-private.bitbucket.org> New issue 259: passenv statement should accept multi-line list https://bitbucket.org/hpk42/tox/issue/259/passenv-statement-should-accept-multi-line Federico Ressi: It would be useful to insert a large list of environment variables to 'passenv' instead of inserting a single long line. From commits-noreply at bitbucket.org Thu Jun 18 16:34:45 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 18 Jun 2015 14:34:45 -0000 Subject: [Pytest-commit] commit/tox: 3 new changesets Message-ID: <20150618143445.26502.76193@app10.ash-private.bitbucket.org> 3 new commits in tox: https://bitbucket.org/hpk42/tox/commits/521459223e52/ Changeset: 521459223e52 User: hpk42 Date: 2015-06-18 14:07:12+00:00 Summary: allow all env variables during installation of dependencies Affected #: 7 files diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 521459223e524337d187ef07277f0ee1485cd313 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,3 +1,9 @@ +2.1.0 +---------- + +- fix issue258, fix issue248, fix issue253: for non-test commands + (installation, venv creation) we pass in the full invocation environment. + 2.0.2 ---------- diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 521459223e524337d187ef07277f0ee1485cd313 doc/config.txt --- a/doc/config.txt +++ b/doc/config.txt @@ -195,14 +195,16 @@ A list of wildcard environment variable names which shall be copied from the tox invocation environment to the test - environment. If a specified environment variable doesn't exist in the tox - invocation environment it is ignored. You can use ``*`` and ``?`` to - match multiple environment variables with one name. 
+ environment when executing test commands. If a specified environment + variable doesn't exist in the tox invocation environment it is ignored. + You can use ``*`` and ``?`` to match multiple environment variables with + one name. - Note that the ``PATH`` and ``PIP_INDEX_URL`` variables are unconditionally - passed down and on Windows ``SYSTEMROOT``, ``PATHEXT``, ``TEMP`` and ``TMP`` - will be passed down as well whereas on unix ``TMPDIR`` will be passed down. - You can override these variables with the ``setenv`` option. + Note that the ``PATH``, ``LANG`` and ``PIP_INDEX_URL`` variables are + unconditionally passed down and on Windows ``SYSTEMROOT``, ``PATHEXT``, + ``TEMP`` and ``TMP`` will be passed down as well whereas on unix + ``TMPDIR`` will be passed down. You can override these variables + with the ``setenv`` option. .. confval:: recreate=True|False(default) diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 521459223e524337d187ef07277f0ee1485cd313 setup.py --- a/setup.py +++ b/setup.py @@ -48,7 +48,7 @@ description='virtualenv-based automation of test activities', long_description=open("README.rst").read(), url='http://tox.testrun.org/', - version='2.0.2', + version='2.1.0.dev1', license='http://opensource.org/licenses/MIT', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], author='holger krekel', diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 521459223e524337d187ef07277f0ee1485cd313 tests/test_venv.py --- a/tests/test_venv.py +++ b/tests/test_venv.py @@ -510,6 +510,7 @@ def test_env_variables_added_to_pcall(tmpdir, mocksession, newconfig, monkeypatch): pkg = tmpdir.ensure("package.tar.gz") monkeypatch.setenv("X123", "123") + monkeypatch.setenv("YY", "456") config = newconfig([], """ [testenv:python] commands=python -V @@ -533,9 +534,12 @@ assert env['ENV_VAR'] == 'value' assert env['VIRTUAL_ENV'] == str(venv.path) assert env['X123'] == "123" + # all env variables are passed for installation + assert l[0].env["YY"] == "456" + assert "YY" not in 
l[1].env assert set(["ENV_VAR", "VIRTUAL_ENV", "PYTHONHASHSEED", "X123", "PATH"])\ - .issubset(env) + .issubset(l[1].env) # for e in os.environ: # assert e in env diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 521459223e524337d187ef07277f0ee1485cd313 tox/__init__.py --- a/tox/__init__.py +++ b/tox/__init__.py @@ -1,5 +1,5 @@ # -__version__ = '2.0.2' +__version__ = '2.1.0.dev1' from .hookspecs import hookspec, hookimpl # noqa diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 521459223e524337d187ef07277f0ee1485cd313 tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -409,8 +409,11 @@ parser.add_testenv_attribute( name="passenv", type="space-separated-list", postprocess=passenv, - help="environment variables names which shall be passed " - "from tox invocation to test environment when executing commands.") + help="environment variables needed during executing test commands " + "(taken from invocation environment). Not that tox always " + "passes in some basic environment variables which are needed for " + "basic functioning of the Python interpreter. 
See --showconfig " + "for the resulting passenv setting.") parser.add_testenv_attribute( name="whitelist_externals", type="line-list", diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 521459223e524337d187ef07277f0ee1485cd313 tox/venv.py --- a/tox/venv.py +++ b/tox/venv.py @@ -317,16 +317,22 @@ action=action, extraenv=extraenv) def _getenv(self, extraenv={}): - env = {} - for envname in self.envconfig.passenv: - if envname in os.environ: - env[envname] = os.environ[envname] + if extraenv is None: + # for executing tests + env = {} + for envname in self.envconfig.passenv: + if envname in os.environ: + env[envname] = os.environ[envname] + else: + # for executing install commands + env = os.environ.copy() env.update(self.envconfig.setenv) env['VIRTUAL_ENV'] = str(self.path) + if extraenv is not None: + env.update(extraenv) - env.update(extraenv) return env def test(self, redirect=False): @@ -357,7 +363,7 @@ try: self._pcall(argv, cwd=cwd, action=action, redirect=redirect, - ignore_ret=ignore_ret) + ignore_ret=ignore_ret, extraenv=None) except tox.exception.InvocationError as err: self.session.report.error(str(err)) self.status = "commands failed" https://bitbucket.org/hpk42/tox/commits/4091d0ea77e8/ Changeset: 4091d0ea77e8 User: hpk42 Date: 2015-06-18 14:07:13+00:00 Summary: remove --set-home option which probably nobody used and was hackily implemented Affected #: 4 files diff -r 521459223e524337d187ef07277f0ee1485cd313 -r 4091d0ea77e84887031b2fb34050c5da8d8bc252 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -4,6 +4,11 @@ - fix issue258, fix issue248, fix issue253: for non-test commands (installation, venv creation) we pass in the full invocation environment. 
+- remove experimental --set-home option which was hardly used and + hackily implemented (if people want home-directory isolation we should + figure out a better way to do it, possibly through a plugin) + + 2.0.2 ---------- diff -r 521459223e524337d187ef07277f0ee1485cd313 -r 4091d0ea77e84887031b2fb34050c5da8d8bc252 tests/test_venv.py --- a/tests/test_venv.py +++ b/tests/test_venv.py @@ -619,48 +619,3 @@ assert x4.endswith(os.sep + 'x') mocksession.report.expect("warning", "*test command found but not*") - -def test_sethome_only_on_option(newmocksession, monkeypatch): - mocksession = newmocksession([], "") - venv = mocksession.getenv('python') - action = mocksession.newaction(venv, "qwe", []) - monkeypatch.setattr(tox.venv, "hack_home_env", None) - venv._install(["x"], action=action) - - -def test_sethome_works_on_option(newmocksession, monkeypatch): - mocksession = newmocksession(["--set-home", "-i ALL=http://qwe"], "") - venv = mocksession.getenv('python') - action = mocksession.newaction(venv, "qwe", []) - venv._install(["x"], action=action) - _, mocked = mocksession.report.getnext("logpopen") - p = mocked.env["HOME"] - pydist = py.path.local(p).join(".pydistutils.cfg") - assert "http://qwe" in pydist.read() - - -def test_hack_home_env(tmpdir): - from tox.venv import hack_home_env - env = hack_home_env(tmpdir, "http://index") - assert env["HOME"] == str(tmpdir) - assert env["PIP_INDEX_URL"] == "http://index" - assert "index_url = http://index" in \ - tmpdir.join(".pydistutils.cfg").read() - tmpdir.remove() - env = hack_home_env(tmpdir, None) - assert env["HOME"] == str(tmpdir) - assert not tmpdir.join(".pydistutils.cfg").check() - assert "PIP_INDEX_URL" not in env - - -def test_hack_home_env_passthrough(tmpdir, monkeypatch): - from tox.venv import hack_home_env - env = hack_home_env(tmpdir, "http://index") - monkeypatch.setattr(os, "environ", env) - - tmpdir = tmpdir.mkdir("tmpdir2") - env2 = hack_home_env(tmpdir) - assert env2["HOME"] == str(tmpdir) - assert 
env2["PIP_INDEX_URL"] == "http://index" - assert "index_url = http://index" in \ - tmpdir.join(".pydistutils.cfg").read() diff -r 521459223e524337d187ef07277f0ee1485cd313 -r 4091d0ea77e84887031b2fb34050c5da8d8bc252 tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -283,10 +283,6 @@ parser.add_argument("--develop", action="store_true", dest="develop", help="install package in the venv using 'setup.py develop' via " "'pip -e .'") - parser.add_argument("--set-home", action="store_true", dest="sethome", - help="(experimental) force creating a new $HOME for each test " - "environment and create .pydistutils.cfg|pip.conf files " - "if index servers are specified with tox. ") parser.add_argument('-i', action="append", dest="indexurl", metavar="URL", help="set indexserver url (if URL is of form name=url set the " @@ -410,10 +406,10 @@ parser.add_testenv_attribute( name="passenv", type="space-separated-list", postprocess=passenv, help="environment variables needed during executing test commands " - "(taken from invocation environment). Not that tox always " - "passes in some basic environment variables which are needed for " - "basic functioning of the Python interpreter. See --showconfig " - "for the resulting passenv setting.") + "(taken from invocation environment). Note that tox always " + "passes through some basic environment variables which are " + "needed for basic functioning of the Python system. 
" + "See --showconfig for the eventual passenv setting.") parser.add_testenv_attribute( name="whitelist_externals", type="line-list", diff -r 521459223e524337d187ef07277f0ee1485cd313 -r 4091d0ea77e84887031b2fb34050c5da8d8bc252 tox/venv.py --- a/tox/venv.py +++ b/tox/venv.py @@ -259,9 +259,7 @@ l.append("--pre") return l - def run_install_command(self, packages, options=(), - indexserver=None, action=None, - extraenv=None): + def run_install_command(self, packages, options=(), action=None): argv = self.envconfig.install_command[:] # use pip-script on win32 to avoid the executable locking i = argv.index('{packages}') @@ -277,10 +275,7 @@ pass old_stdout = sys.stdout sys.stdout = codecs.getwriter('utf8')(sys.stdout) - if extraenv is None: - extraenv = {} - self._pcall(argv, cwd=self.envconfig.config.toxinidir, - extraenv=extraenv, action=action) + self._pcall(argv, cwd=self.envconfig.config.toxinidir, action=action) sys.stdout = old_stdout def _install(self, deps, extraopts=None, action=None): @@ -302,31 +297,26 @@ assert ixserver.url is None or isinstance(ixserver.url, str) for ixserver in l: - if self.envconfig.config.option.sethome: - extraenv = hack_home_env( - homedir=self.envconfig.envtmpdir.join("pseudo-home"), - index_url=ixserver.url) - else: - extraenv = {} - packages = d[ixserver] options = self._installopts(ixserver.url) if extraopts: options.extend(extraopts) self.run_install_command(packages=packages, options=options, - action=action, extraenv=extraenv) + action=action) def _getenv(self, extraenv={}): if extraenv is None: - # for executing tests + # for executing tests we construct a clean environment env = {} for envname in self.envconfig.passenv: if envname in os.environ: env[envname] = os.environ[envname] else: - # for executing install commands + # for executing non-test commands we use the full + # invocation environment env = os.environ.copy() + # in any case we honor per-testenv setenv configuration env.update(self.envconfig.setenv) 
env['VIRTUAL_ENV'] = str(self.path) @@ -405,24 +395,3 @@ return "0" * 32 return path.computehash() - -def hack_home_env(homedir, index_url=None): - # XXX HACK (this could also live with tox itself, consider) - # if tox uses pip on a package that requires setup_requires - # the index url set with pip is usually not recognized - # because it is setuptools executing very early. - # We therefore run the tox command in an artifical home - # directory and set .pydistutils.cfg and pip.conf files - # accordingly. - if not homedir.check(): - homedir.ensure(dir=1) - d = dict(HOME=str(homedir)) - if not index_url: - index_url = os.environ.get("TOX_INDEX_URL") - if index_url: - homedir.join(".pydistutils.cfg").write( - "[easy_install]\n" - "index_url = %s\n" % index_url) - d["PIP_INDEX_URL"] = index_url - d["TOX_INDEX_URL"] = index_url - return d https://bitbucket.org/hpk42/tox/commits/673d3f1f8d09/ Changeset: 673d3f1f8d09 User: hpk42 Date: 2015-06-18 14:07:14+00:00 Summary: cleanup internal env variable code a bit Affected #: 3 files diff -r 4091d0ea77e84887031b2fb34050c5da8d8bc252 -r 673d3f1f8d095cd8e1733506ffdc2f162b93e59b tests/test_venv.py --- a/tests/test_venv.py +++ b/tests/test_venv.py @@ -618,4 +618,3 @@ x4 = venv.getcommandpath("x", cwd=tmpdir) assert x4.endswith(os.sep + 'x') mocksession.report.expect("warning", "*test command found but not*") - diff -r 4091d0ea77e84887031b2fb34050c5da8d8bc252 -r 673d3f1f8d095cd8e1733506ffdc2f162b93e59b tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -295,11 +295,9 @@ dest="recreate", help="force recreation of virtual environments") parser.add_argument("--result-json", action="store", - dest="resultjson", metavar="PATH", - help="write a json file with detailed information about " - "all commands and results involved. 
This will turn off " - "pass-through output from running test commands which is " - "instead captured into the json result file.") + dest="resultjson", metavar="PATH", + help="write a json file with detailed information about " + "all commands and results involved.") # We choose 1 to 4294967295 because it is the range of PYTHONHASHSEED. parser.add_argument("--hashseed", action="store", diff -r 4091d0ea77e84887031b2fb34050c5da8d8bc252 -r 673d3f1f8d095cd8e1733506ffdc2f162b93e59b tox/venv.py --- a/tox/venv.py +++ b/tox/venv.py @@ -267,12 +267,11 @@ if '{opts}' in argv: i = argv.index('{opts}') argv[i:i + 1] = list(options) + for x in ('PIP_RESPECT_VIRTUALENV', 'PIP_REQUIRE_VIRTUALENV', '__PYVENV_LAUNCHER__'): - try: - del os.environ[x] - except KeyError: - pass + os.environ.pop(x, None) + old_stdout = sys.stdout sys.stdout = codecs.getwriter('utf8')(sys.stdout) self._pcall(argv, cwd=self.envconfig.config.toxinidir, action=action) @@ -304,8 +303,8 @@ self.run_install_command(packages=packages, options=options, action=action) - def _getenv(self, extraenv={}): - if extraenv is None: + def _getenv(self, testcommand=False): + if testcommand: # for executing tests we construct a clean environment env = {} for envname in self.envconfig.passenv: @@ -320,9 +319,6 @@ env.update(self.envconfig.setenv) env['VIRTUAL_ENV'] = str(self.path) - if extraenv is not None: - env.update(extraenv) - return env def test(self, redirect=False): @@ -331,7 +327,7 @@ self.status = 0 self.session.make_emptydir(self.envconfig.envtmpdir) cwd = self.envconfig.changedir - env = self._getenv() + env = self._getenv(testcommand=True) # Display PYTHONHASHSEED to assist with reproducibility. 
action.setactivity("runtests", "PYTHONHASHSEED=%r" % env.get('PYTHONHASHSEED')) for i, argv in enumerate(self.envconfig.commands): @@ -353,7 +349,7 @@ try: self._pcall(argv, cwd=cwd, action=action, redirect=redirect, - ignore_ret=ignore_ret, extraenv=None) + ignore_ret=ignore_ret, testcommand=True) except tox.exception.InvocationError as err: self.session.report.error(str(err)) self.status = "commands failed" @@ -364,20 +360,18 @@ self.session.report.error(self.status) raise - def _pcall(self, args, venv=True, cwd=None, extraenv={}, + def _pcall(self, args, cwd, venv=True, testcommand=False, action=None, redirect=True, ignore_ret=False): for name in ("VIRTUALENV_PYTHON", "PYTHONDONTWRITEBYTECODE"): - try: - del os.environ[name] - except KeyError: - pass - assert cwd + os.environ.pop(name, None) + cwd.ensure(dir=1) old = self.patchPATH() try: args[0] = self.getcommandpath(args[0], venv, cwd) - env = self._getenv(extraenv) - return action.popen(args, cwd=cwd, env=env, redirect=redirect, ignore_ret=ignore_ret) + env = self._getenv(testcommand=testcommand) + return action.popen(args, cwd=cwd, env=env, + redirect=redirect, ignore_ret=ignore_ret) finally: os.environ['PATH'] = old @@ -394,4 +388,3 @@ if not path.check(file=1): return "0" * 32 return path.computehash() - Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. 
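The cleanup commit above replaces the repeated `try: del os.environ[x] / except KeyError: pass` dance with `os.environ.pop(x, None)` and swaps `_getenv`'s `extraenv` parameter for a `testcommand` flag. A minimal standalone sketch of the resulting environment construction — the `passenv`/`setenv`/`venv_path` parameters here are hypothetical stand-ins, not tox's real config objects:

```python
import os

def build_env(passenv, setenv, venv_path, testcommand=False):
    """Sketch of tox's _getenv() after the refactor.

    testcommand=True  -> a clean environment holding only passenv names
    testcommand=False -> a full copy of the invocation environment
    In both cases per-testenv setenv entries are applied on top.
    """
    if testcommand:
        # for test commands: start empty and copy only the variables
        # whitelisted via passenv
        env = {name: os.environ[name] for name in passenv if name in os.environ}
    else:
        # for install / venv-creation commands: pass the full environment
        env = os.environ.copy()
    env.update(setenv)                  # per-testenv setenv always wins
    env["VIRTUAL_ENV"] = str(venv_path)
    return env

# the pop() idiom from the same commit: delete a key if present,
# no KeyError handling needed
for name in ("VIRTUALENV_PYTHON", "PYTHONDONTWRITEBYTECODE"):
    os.environ.pop(name, None)
```

The one-liner `os.environ.pop(name, None)` is behaviorally identical to the removed four-line `try`/`except KeyError` block, which is why the diff can drop it without changing semantics.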
From commits-noreply at bitbucket.org Fri Jun 19 10:59:44 2015
From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org)
Date: Fri, 19 Jun 2015 08:59:44 -0000
Subject: [Pytest-commit] commit/tox: 4 new changesets
Message-ID: <20150619085944.4971.21825@app05.ash-private.bitbucket.org>

4 new commits in tox:

https://bitbucket.org/hpk42/tox/commits/f1522d2de5f0/
Changeset: f1522d2de5f0
Branch: typofix
User: hpk42
Date: 2015-06-19 08:59:32+00:00
Summary: close branch
Affected #: 0 files

https://bitbucket.org/hpk42/tox/commits/7febf35f237e/
Changeset: 7febf35f237e
Branch: maquilina/fix-minor-grammatical-error-in-readmerst-1431506151478
User: hpk42
Date: 2015-06-19 08:59:32+00:00
Summary: close branch
Affected #: 0 files

https://bitbucket.org/hpk42/tox/commits/8f795e1a5fe2/
Changeset: 8f795e1a5fe2
Branch: stevepiercy/insert-missing-the-for-grammar-rewrap-to-1431248739542
User: hpk42
Date: 2015-06-19 08:59:33+00:00
Summary: close branch
Affected #: 0 files

https://bitbucket.org/hpk42/tox/commits/cf9b51268376/
Changeset: cf9b51268376
Branch: pluggy
User: hpk42
Date: 2015-06-19 08:59:33+00:00
Summary: close branch
Affected #: 0 files

Repository URL: https://bitbucket.org/hpk42/tox/
From commits-noreply at bitbucket.org Fri Jun 19 11:10:20 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 19 Jun 2015 09:10:20 -0000 Subject: [Pytest-commit] commit/tox: hpk42: Merged in stefano-m/tox/passenv_multiline (pull request #166) Message-ID: <20150619091020.10412.12569@app11.ash-private.bitbucket.org> 1 new commit in tox: https://bitbucket.org/hpk42/tox/commits/37c6dac2b848/ Changeset: 37c6dac2b848 User: hpk42 Date: 2015-06-19 09:10:16+00:00 Summary: Merged in stefano-m/tox/passenv_multiline (pull request #166) Issue #259 passenv statement should accept multi-line list Affected #: 2 files diff -r 673d3f1f8d095cd8e1733506ffdc2f162b93e59b -r 37c6dac2b8484bb084dd6fff81fde108e5b96a4c tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -696,14 +696,45 @@ assert envconfig.setenv['ANOTHER_VAL'] == 'else' @pytest.mark.parametrize("plat", ["win32", "linux2"]) - def test_passenv(self, tmpdir, newconfig, monkeypatch, plat): + def test_passenv_as_multiline_list(self, tmpdir, newconfig, monkeypatch, plat): monkeypatch.setattr(sys, "platform", plat) monkeypatch.setenv("A123A", "a") monkeypatch.setenv("A123B", "b") monkeypatch.setenv("BX23", "0") config = newconfig(""" [testenv] - passenv = A123* B?23 + passenv = + A123* + # isolated comment + B?23 + """) + assert len(config.envconfigs) == 1 + envconfig = config.envconfigs['python'] + if plat == "win32": + assert "PATHEXT" in envconfig.passenv + assert "SYSTEMDRIVE" in envconfig.passenv + assert "SYSTEMROOT" in envconfig.passenv + assert "TEMP" in envconfig.passenv + assert "TMP" in envconfig.passenv + else: + assert "TMPDIR" in envconfig.passenv + assert "PATH" in envconfig.passenv + assert "PIP_INDEX_URL" in envconfig.passenv + assert "LANG" in envconfig.passenv + assert "A123A" in envconfig.passenv + assert "A123B" in envconfig.passenv + + @pytest.mark.parametrize("plat", ["win32", "linux2"]) + def test_passenv_as_space_separated_list(self, tmpdir, 
newconfig, monkeypatch, plat): + monkeypatch.setattr(sys, "platform", plat) + monkeypatch.setenv("A123A", "a") + monkeypatch.setenv("A123B", "b") + monkeypatch.setenv("BX23", "0") + config = newconfig(""" + [testenv] + passenv = + # comment + A123* B?23 """) assert len(config.envconfigs) == 1 envconfig = config.envconfigs['python'] @@ -724,20 +755,39 @@ def test_passenv_with_factor(self, tmpdir, newconfig, monkeypatch): monkeypatch.setenv("A123A", "a") monkeypatch.setenv("A123B", "b") + monkeypatch.setenv("A123C", "c") + monkeypatch.setenv("A123D", "d") monkeypatch.setenv("BX23", "0") + monkeypatch.setenv("CCA43", "3") + monkeypatch.setenv("CB21", "4") config = newconfig(""" [tox] envlist = {x1,x2} [testenv] passenv = - x1: A123A - x2: A123B + x1: A123A CC* + x1: CB21 + # passed to both environments + A123C + x2: A123B A123D """) assert len(config.envconfigs) == 2 + assert "A123A" in config.envconfigs["x1"].passenv + assert "A123C" in config.envconfigs["x1"].passenv + assert "CCA43" in config.envconfigs["x1"].passenv + assert "CB21" in config.envconfigs["x1"].passenv assert "A123B" not in config.envconfigs["x1"].passenv + assert "A123D" not in config.envconfigs["x1"].passenv + assert "BX23" not in config.envconfigs["x1"].passenv + assert "A123B" in config.envconfigs["x2"].passenv + assert "A123D" in config.envconfigs["x2"].passenv assert "A123A" not in config.envconfigs["x2"].passenv + assert "A123C" in config.envconfigs["x2"].passenv + assert "CCA43" not in config.envconfigs["x2"].passenv + assert "CB21" not in config.envconfigs["x2"].passenv + assert "BX23" not in config.envconfigs["x2"].passenv def test_changedir_override(self, tmpdir, newconfig): config = newconfig(""" diff -r 673d3f1f8d095cd8e1733506ffdc2f162b93e59b -r 37c6dac2b8484bb084dd6fff81fde108e5b96a4c tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -381,6 +381,11 @@ help="list of X=Y lines with environment variable settings") def passenv(testenv_config, value): + # Flatten the list to deal 
with space-separated values. + value = list( + itertools.chain.from_iterable( + [x.split(' ') for x in value])) + passenv = set(["PATH", "PIP_INDEX_URL", "LANG"]) # we ensure that tmp directory settings are passed on @@ -402,7 +407,7 @@ return passenv parser.add_testenv_attribute( - name="passenv", type="space-separated-list", postprocess=passenv, + name="passenv", type="line-list", postprocess=passenv, help="environment variables needed during executing test commands " "(taken from invocation environment). Note that tox always " "passes through some basic environment variables which are " Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Fri Jun 19 11:10:20 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 19 Jun 2015 09:10:20 -0000 Subject: [Pytest-commit] commit/tox: 4 new changesets Message-ID: <20150619091020.26015.74696@app05.ash-private.bitbucket.org> 4 new commits in tox: https://bitbucket.org/hpk42/tox/commits/438a7c640081/ Changeset: 438a7c640081 Branch: passenv_multiline User: mazzucco Date: 2015-06-18 09:50:48+00:00 Summary: hpk42/tox/issue/259/passenv-statement-should-accept-multi-line Affected #: 2 files diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 438a7c640081598e9d97d100fcbf5122c327e04f tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -696,14 +696,44 @@ assert envconfig.setenv['ANOTHER_VAL'] == 'else' @pytest.mark.parametrize("plat", ["win32", "linux2"]) - def test_passenv(self, tmpdir, newconfig, monkeypatch, plat): + def test_passenv_as_multiline_list(self, tmpdir, newconfig, monkeypatch, plat): monkeypatch.setattr(sys, "platform", plat) monkeypatch.setenv("A123A", "a") monkeypatch.setenv("A123B", "b") monkeypatch.setenv("BX23", "0") config = newconfig(""" [testenv] - passenv = A123* 
B?23 + passenv = + A123* + # isolated comment + B?23 + """) + assert len(config.envconfigs) == 1 + envconfig = config.envconfigs['python'] + if plat == "win32": + assert "PATHEXT" in envconfig.passenv + assert "SYSTEMDRIVE" in envconfig.passenv + assert "SYSTEMROOT" in envconfig.passenv + assert "TEMP" in envconfig.passenv + assert "TMP" in envconfig.passenv + else: + assert "TMPDIR" in envconfig.passenv + assert "PATH" in envconfig.passenv + assert "PIP_INDEX_URL" in envconfig.passenv + assert "LANG" in envconfig.passenv + assert "A123A" in envconfig.passenv + assert "A123B" in envconfig.passenv + + @pytest.mark.parametrize("plat", ["win32", "linux2"]) + def test_passenv_as_space_separated_list(self, tmpdir, newconfig, monkeypatch, plat): + monkeypatch.setattr(sys, "platform", plat) + monkeypatch.setenv("A123A", "a") + monkeypatch.setenv("A123B", "b") + monkeypatch.setenv("BX23", "0") + config = newconfig(""" + [testenv] + passenv = + A123* B?23 """) assert len(config.envconfigs) == 1 envconfig = config.envconfigs['python'] diff -r 6d4378ce90621105cda12fb2f9dfc67dfbefc7ca -r 438a7c640081598e9d97d100fcbf5122c327e04f tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -387,6 +387,11 @@ help="list of X=Y lines with environment variable settings") def passenv(testenv_config, value): + if len(value) == 1 and "\n" in value[0]: + # If we have a list of 1 element that contains new lines, + # passenv has been specified as a multi line list. 
+ value = value[0].split("\n") + passenv = set(["PATH", "PIP_INDEX_URL", "LANG"]) # we ensure that tmp directory settings are passed on https://bitbucket.org/hpk42/tox/commits/84af17ba7280/ Changeset: 84af17ba7280 Branch: passenv_multiline User: stefano-m Date: 2015-06-18 20:15:04+00:00 Summary: merge with default Affected #: 7 files diff -r 438a7c640081598e9d97d100fcbf5122c327e04f -r 84af17ba72801f740b12663e6f4cc56cbbb404eb CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,3 +1,14 @@ +2.1.0 +---------- + +- fix issue258, fix issue248, fix issue253: for non-test commands + (installation, venv creation) we pass in the full invocation environment. + +- remove experimental --set-home option which was hardly used and + hackily implemented (if people want home-directory isolation we should + figure out a better way to do it, possibly through a plugin) + + 2.0.2 ---------- diff -r 438a7c640081598e9d97d100fcbf5122c327e04f -r 84af17ba72801f740b12663e6f4cc56cbbb404eb doc/config.txt --- a/doc/config.txt +++ b/doc/config.txt @@ -195,14 +195,16 @@ A list of wildcard environment variable names which shall be copied from the tox invocation environment to the test - environment. If a specified environment variable doesn't exist in the tox - invocation environment it is ignored. You can use ``*`` and ``?`` to - match multiple environment variables with one name. + environment when executing test commands. If a specified environment + variable doesn't exist in the tox invocation environment it is ignored. + You can use ``*`` and ``?`` to match multiple environment variables with + one name. - Note that the ``PATH`` and ``PIP_INDEX_URL`` variables are unconditionally - passed down and on Windows ``SYSTEMROOT``, ``PATHEXT``, ``TEMP`` and ``TMP`` - will be passed down as well whereas on unix ``TMPDIR`` will be passed down. - You can override these variables with the ``setenv`` option. 
+ Note that the ``PATH``, ``LANG`` and ``PIP_INDEX_URL`` variables are + unconditionally passed down and on Windows ``SYSTEMROOT``, ``PATHEXT``, + ``TEMP`` and ``TMP`` will be passed down as well whereas on unix + ``TMPDIR`` will be passed down. You can override these variables + with the ``setenv`` option. .. confval:: recreate=True|False(default) diff -r 438a7c640081598e9d97d100fcbf5122c327e04f -r 84af17ba72801f740b12663e6f4cc56cbbb404eb setup.py --- a/setup.py +++ b/setup.py @@ -48,7 +48,7 @@ description='virtualenv-based automation of test activities', long_description=open("README.rst").read(), url='http://tox.testrun.org/', - version='2.0.2', + version='2.1.0.dev1', license='http://opensource.org/licenses/MIT', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], author='holger krekel', diff -r 438a7c640081598e9d97d100fcbf5122c327e04f -r 84af17ba72801f740b12663e6f4cc56cbbb404eb tests/test_venv.py --- a/tests/test_venv.py +++ b/tests/test_venv.py @@ -510,6 +510,7 @@ def test_env_variables_added_to_pcall(tmpdir, mocksession, newconfig, monkeypatch): pkg = tmpdir.ensure("package.tar.gz") monkeypatch.setenv("X123", "123") + monkeypatch.setenv("YY", "456") config = newconfig([], """ [testenv:python] commands=python -V @@ -533,9 +534,12 @@ assert env['ENV_VAR'] == 'value' assert env['VIRTUAL_ENV'] == str(venv.path) assert env['X123'] == "123" + # all env variables are passed for installation + assert l[0].env["YY"] == "456" + assert "YY" not in l[1].env assert set(["ENV_VAR", "VIRTUAL_ENV", "PYTHONHASHSEED", "X123", "PATH"])\ - .issubset(env) + .issubset(l[1].env) # for e in os.environ: # assert e in env @@ -614,49 +618,3 @@ x4 = venv.getcommandpath("x", cwd=tmpdir) assert x4.endswith(os.sep + 'x') mocksession.report.expect("warning", "*test command found but not*") - - -def test_sethome_only_on_option(newmocksession, monkeypatch): - mocksession = newmocksession([], "") - venv = mocksession.getenv('python') - action = mocksession.newaction(venv, "qwe", []) - 
monkeypatch.setattr(tox.venv, "hack_home_env", None) - venv._install(["x"], action=action) - - -def test_sethome_works_on_option(newmocksession, monkeypatch): - mocksession = newmocksession(["--set-home", "-i ALL=http://qwe"], "") - venv = mocksession.getenv('python') - action = mocksession.newaction(venv, "qwe", []) - venv._install(["x"], action=action) - _, mocked = mocksession.report.getnext("logpopen") - p = mocked.env["HOME"] - pydist = py.path.local(p).join(".pydistutils.cfg") - assert "http://qwe" in pydist.read() - - -def test_hack_home_env(tmpdir): - from tox.venv import hack_home_env - env = hack_home_env(tmpdir, "http://index") - assert env["HOME"] == str(tmpdir) - assert env["PIP_INDEX_URL"] == "http://index" - assert "index_url = http://index" in \ - tmpdir.join(".pydistutils.cfg").read() - tmpdir.remove() - env = hack_home_env(tmpdir, None) - assert env["HOME"] == str(tmpdir) - assert not tmpdir.join(".pydistutils.cfg").check() - assert "PIP_INDEX_URL" not in env - - -def test_hack_home_env_passthrough(tmpdir, monkeypatch): - from tox.venv import hack_home_env - env = hack_home_env(tmpdir, "http://index") - monkeypatch.setattr(os, "environ", env) - - tmpdir = tmpdir.mkdir("tmpdir2") - env2 = hack_home_env(tmpdir) - assert env2["HOME"] == str(tmpdir) - assert env2["PIP_INDEX_URL"] == "http://index" - assert "index_url = http://index" in \ - tmpdir.join(".pydistutils.cfg").read() diff -r 438a7c640081598e9d97d100fcbf5122c327e04f -r 84af17ba72801f740b12663e6f4cc56cbbb404eb tox/__init__.py --- a/tox/__init__.py +++ b/tox/__init__.py @@ -1,5 +1,5 @@ # -__version__ = '2.0.2' +__version__ = '2.1.0.dev1' from .hookspecs import hookspec, hookimpl # noqa diff -r 438a7c640081598e9d97d100fcbf5122c327e04f -r 84af17ba72801f740b12663e6f4cc56cbbb404eb tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -283,10 +283,6 @@ parser.add_argument("--develop", action="store_true", dest="develop", help="install package in the venv using 'setup.py develop' via " "'pip -e 
.'") - parser.add_argument("--set-home", action="store_true", dest="sethome", - help="(experimental) force creating a new $HOME for each test " - "environment and create .pydistutils.cfg|pip.conf files " - "if index servers are specified with tox. ") parser.add_argument('-i', action="append", dest="indexurl", metavar="URL", help="set indexserver url (if URL is of form name=url set the " @@ -299,11 +295,9 @@ dest="recreate", help="force recreation of virtual environments") parser.add_argument("--result-json", action="store", - dest="resultjson", metavar="PATH", - help="write a json file with detailed information about " - "all commands and results involved. This will turn off " - "pass-through output from running test commands which is " - "instead captured into the json result file.") + dest="resultjson", metavar="PATH", + help="write a json file with detailed information about " + "all commands and results involved.") # We choose 1 to 4294967295 because it is the range of PYTHONHASHSEED. parser.add_argument("--hashseed", action="store", @@ -414,8 +408,11 @@ parser.add_testenv_attribute( name="passenv", type="space-separated-list", postprocess=passenv, - help="environment variables names which shall be passed " - "from tox invocation to test environment when executing commands.") + help="environment variables needed during executing test commands " + "(taken from invocation environment). Note that tox always " + "passes through some basic environment variables which are " + "needed for basic functioning of the Python system. 
" + "See --showconfig for the eventual passenv setting.") parser.add_testenv_attribute( name="whitelist_externals", type="line-list", diff -r 438a7c640081598e9d97d100fcbf5122c327e04f -r 84af17ba72801f740b12663e6f4cc56cbbb404eb tox/venv.py --- a/tox/venv.py +++ b/tox/venv.py @@ -259,9 +259,7 @@ l.append("--pre") return l - def run_install_command(self, packages, options=(), - indexserver=None, action=None, - extraenv=None): + def run_install_command(self, packages, options=(), action=None): argv = self.envconfig.install_command[:] # use pip-script on win32 to avoid the executable locking i = argv.index('{packages}') @@ -269,18 +267,14 @@ if '{opts}' in argv: i = argv.index('{opts}') argv[i:i + 1] = list(options) + for x in ('PIP_RESPECT_VIRTUALENV', 'PIP_REQUIRE_VIRTUALENV', '__PYVENV_LAUNCHER__'): - try: - del os.environ[x] - except KeyError: - pass + os.environ.pop(x, None) + old_stdout = sys.stdout sys.stdout = codecs.getwriter('utf8')(sys.stdout) - if extraenv is None: - extraenv = {} - self._pcall(argv, cwd=self.envconfig.config.toxinidir, - extraenv=extraenv, action=action) + self._pcall(argv, cwd=self.envconfig.config.toxinidir, action=action) sys.stdout = old_stdout def _install(self, deps, extraopts=None, action=None): @@ -302,31 +296,29 @@ assert ixserver.url is None or isinstance(ixserver.url, str) for ixserver in l: - if self.envconfig.config.option.sethome: - extraenv = hack_home_env( - homedir=self.envconfig.envtmpdir.join("pseudo-home"), - index_url=ixserver.url) - else: - extraenv = {} - packages = d[ixserver] options = self._installopts(ixserver.url) if extraopts: options.extend(extraopts) self.run_install_command(packages=packages, options=options, - action=action, extraenv=extraenv) + action=action) - def _getenv(self, extraenv={}): - env = {} - for envname in self.envconfig.passenv: - if envname in os.environ: - env[envname] = os.environ[envname] + def _getenv(self, testcommand=False): + if testcommand: + # for executing tests we construct a 
clean environment + env = {} + for envname in self.envconfig.passenv: + if envname in os.environ: + env[envname] = os.environ[envname] + else: + # for executing non-test commands we use the full + # invocation environment + env = os.environ.copy() + # in any case we honor per-testenv setenv configuration env.update(self.envconfig.setenv) env['VIRTUAL_ENV'] = str(self.path) - - env.update(extraenv) return env def test(self, redirect=False): @@ -335,7 +327,7 @@ self.status = 0 self.session.make_emptydir(self.envconfig.envtmpdir) cwd = self.envconfig.changedir - env = self._getenv() + env = self._getenv(testcommand=True) # Display PYTHONHASHSEED to assist with reproducibility. action.setactivity("runtests", "PYTHONHASHSEED=%r" % env.get('PYTHONHASHSEED')) for i, argv in enumerate(self.envconfig.commands): @@ -357,7 +349,7 @@ try: self._pcall(argv, cwd=cwd, action=action, redirect=redirect, - ignore_ret=ignore_ret) + ignore_ret=ignore_ret, testcommand=True) except tox.exception.InvocationError as err: self.session.report.error(str(err)) self.status = "commands failed" @@ -368,20 +360,18 @@ self.session.report.error(self.status) raise - def _pcall(self, args, venv=True, cwd=None, extraenv={}, + def _pcall(self, args, cwd, venv=True, testcommand=False, action=None, redirect=True, ignore_ret=False): for name in ("VIRTUALENV_PYTHON", "PYTHONDONTWRITEBYTECODE"): - try: - del os.environ[name] - except KeyError: - pass - assert cwd + os.environ.pop(name, None) + cwd.ensure(dir=1) old = self.patchPATH() try: args[0] = self.getcommandpath(args[0], venv, cwd) - env = self._getenv(extraenv) - return action.popen(args, cwd=cwd, env=env, redirect=redirect, ignore_ret=ignore_ret) + env = self._getenv(testcommand=testcommand) + return action.popen(args, cwd=cwd, env=env, + redirect=redirect, ignore_ret=ignore_ret) finally: os.environ['PATH'] = old @@ -398,25 +388,3 @@ if not path.check(file=1): return "0" * 32 return path.computehash() - - -def hack_home_env(homedir, index_url=None): 
- # XXX HACK (this could also live with tox itself, consider) - # if tox uses pip on a package that requires setup_requires - # the index url set with pip is usually not recognized - # because it is setuptools executing very early. - # We therefore run the tox command in an artifical home - # directory and set .pydistutils.cfg and pip.conf files - # accordingly. - if not homedir.check(): - homedir.ensure(dir=1) - d = dict(HOME=str(homedir)) - if not index_url: - index_url = os.environ.get("TOX_INDEX_URL") - if index_url: - homedir.join(".pydistutils.cfg").write( - "[easy_install]\n" - "index_url = %s\n" % index_url) - d["PIP_INDEX_URL"] = index_url - d["TOX_INDEX_URL"] = index_url - return d https://bitbucket.org/hpk42/tox/commits/32306cad6374/ Changeset: 32306cad6374 Branch: passenv_multiline User: stefano-m Date: 2015-06-18 21:56:38+00:00 Summary: make passenv attribute type line-list Affected #: 2 files diff -r 84af17ba72801f740b12663e6f4cc56cbbb404eb -r 32306cad63746de7c48760d0021df8371b4269be tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -733,6 +733,7 @@ config = newconfig(""" [testenv] passenv = + # comment A123* B?23 """) assert len(config.envconfigs) == 1 @@ -754,20 +755,39 @@ def test_passenv_with_factor(self, tmpdir, newconfig, monkeypatch): monkeypatch.setenv("A123A", "a") monkeypatch.setenv("A123B", "b") + monkeypatch.setenv("A123C", "c") + monkeypatch.setenv("A123D", "d") monkeypatch.setenv("BX23", "0") + monkeypatch.setenv("CCA43", "3") + monkeypatch.setenv("CB21", "4") config = newconfig(""" [tox] envlist = {x1,x2} [testenv] passenv = - x1: A123A - x2: A123B + x1: A123A CC* + x1: CB21 + # passed to both environments + A123C + x2: A123B A123D """) assert len(config.envconfigs) == 2 + assert "A123A" in config.envconfigs["x1"].passenv + assert "A123C" in config.envconfigs["x1"].passenv + assert "CCA43" in config.envconfigs["x1"].passenv + assert "CB21" in config.envconfigs["x1"].passenv assert "A123B" not in 
config.envconfigs["x1"].passenv + assert "A123D" not in config.envconfigs["x1"].passenv + assert "BX23" not in config.envconfigs["x1"].passenv + assert "A123B" in config.envconfigs["x2"].passenv + assert "A123D" in config.envconfigs["x2"].passenv assert "A123A" not in config.envconfigs["x2"].passenv + assert "A123C" in config.envconfigs["x2"].passenv + assert "CCA43" not in config.envconfigs["x2"].passenv + assert "CB21" not in config.envconfigs["x2"].passenv + assert "BX23" not in config.envconfigs["x2"].passenv def test_changedir_override(self, tmpdir, newconfig): config = newconfig(""" diff -r 84af17ba72801f740b12663e6f4cc56cbbb404eb -r 32306cad63746de7c48760d0021df8371b4269be tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -381,10 +381,10 @@ help="list of X=Y lines with environment variable settings") def passenv(testenv_config, value): - if len(value) == 1 and "\n" in value[0]: - # If we have a list of 1 element that contains new lines, - # passenv has been specified as a multi line list. - value = value[0].split("\n") + # Flatten the list to deal with space-separated values. + value = list( + itertools.chain.from_iterable( + [x.split(' ') for x in value])) passenv = set(["PATH", "PIP_INDEX_URL", "LANG"]) @@ -407,7 +407,7 @@ return passenv parser.add_testenv_attribute( - name="passenv", type="space-separated-list", postprocess=passenv, + name="passenv", type="line-list", postprocess=passenv, help="environment variables needed during executing test commands " "(taken from invocation environment). 
Note that tox always " "passes through some basic environment variables which are " https://bitbucket.org/hpk42/tox/commits/37c6dac2b848/ Changeset: 37c6dac2b848 User: hpk42 Date: 2015-06-19 09:10:16+00:00 Summary: Merged in stefano-m/tox/passenv_multiline (pull request #166) Issue #259 passenv statement should accept multi-line list Affected #: 2 files diff -r 673d3f1f8d095cd8e1733506ffdc2f162b93e59b -r 37c6dac2b8484bb084dd6fff81fde108e5b96a4c tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -696,14 +696,45 @@ assert envconfig.setenv['ANOTHER_VAL'] == 'else' @pytest.mark.parametrize("plat", ["win32", "linux2"]) - def test_passenv(self, tmpdir, newconfig, monkeypatch, plat): + def test_passenv_as_multiline_list(self, tmpdir, newconfig, monkeypatch, plat): monkeypatch.setattr(sys, "platform", plat) monkeypatch.setenv("A123A", "a") monkeypatch.setenv("A123B", "b") monkeypatch.setenv("BX23", "0") config = newconfig(""" [testenv] - passenv = A123* B?23 + passenv = + A123* + # isolated comment + B?23 + """) + assert len(config.envconfigs) == 1 + envconfig = config.envconfigs['python'] + if plat == "win32": + assert "PATHEXT" in envconfig.passenv + assert "SYSTEMDRIVE" in envconfig.passenv + assert "SYSTEMROOT" in envconfig.passenv + assert "TEMP" in envconfig.passenv + assert "TMP" in envconfig.passenv + else: + assert "TMPDIR" in envconfig.passenv + assert "PATH" in envconfig.passenv + assert "PIP_INDEX_URL" in envconfig.passenv + assert "LANG" in envconfig.passenv + assert "A123A" in envconfig.passenv + assert "A123B" in envconfig.passenv + + @pytest.mark.parametrize("plat", ["win32", "linux2"]) + def test_passenv_as_space_separated_list(self, tmpdir, newconfig, monkeypatch, plat): + monkeypatch.setattr(sys, "platform", plat) + monkeypatch.setenv("A123A", "a") + monkeypatch.setenv("A123B", "b") + monkeypatch.setenv("BX23", "0") + config = newconfig(""" + [testenv] + passenv = + # comment + A123* B?23 """) assert len(config.envconfigs) == 1 
envconfig = config.envconfigs['python'] @@ -724,20 +755,39 @@ def test_passenv_with_factor(self, tmpdir, newconfig, monkeypatch): monkeypatch.setenv("A123A", "a") monkeypatch.setenv("A123B", "b") + monkeypatch.setenv("A123C", "c") + monkeypatch.setenv("A123D", "d") monkeypatch.setenv("BX23", "0") + monkeypatch.setenv("CCA43", "3") + monkeypatch.setenv("CB21", "4") config = newconfig(""" [tox] envlist = {x1,x2} [testenv] passenv = - x1: A123A - x2: A123B + x1: A123A CC* + x1: CB21 + # passed to both environments + A123C + x2: A123B A123D """) assert len(config.envconfigs) == 2 + assert "A123A" in config.envconfigs["x1"].passenv + assert "A123C" in config.envconfigs["x1"].passenv + assert "CCA43" in config.envconfigs["x1"].passenv + assert "CB21" in config.envconfigs["x1"].passenv assert "A123B" not in config.envconfigs["x1"].passenv + assert "A123D" not in config.envconfigs["x1"].passenv + assert "BX23" not in config.envconfigs["x1"].passenv + assert "A123B" in config.envconfigs["x2"].passenv + assert "A123D" in config.envconfigs["x2"].passenv assert "A123A" not in config.envconfigs["x2"].passenv + assert "A123C" in config.envconfigs["x2"].passenv + assert "CCA43" not in config.envconfigs["x2"].passenv + assert "CB21" not in config.envconfigs["x2"].passenv + assert "BX23" not in config.envconfigs["x2"].passenv def test_changedir_override(self, tmpdir, newconfig): config = newconfig(""" diff -r 673d3f1f8d095cd8e1733506ffdc2f162b93e59b -r 37c6dac2b8484bb084dd6fff81fde108e5b96a4c tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -381,6 +381,11 @@ help="list of X=Y lines with environment variable settings") def passenv(testenv_config, value): + # Flatten the list to deal with space-separated values. 
+ value = list( + itertools.chain.from_iterable( + [x.split(' ') for x in value])) + passenv = set(["PATH", "PIP_INDEX_URL", "LANG"]) # we ensure that tmp directory settings are passed on @@ -402,7 +407,7 @@ return passenv parser.add_testenv_attribute( - name="passenv", type="space-separated-list", postprocess=passenv, + name="passenv", type="line-list", postprocess=passenv, help="environment variables needed during executing test commands " "(taken from invocation environment). Note that tox always " "passes through some basic environment variables which are " Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Fri Jun 19 11:12:33 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 19 Jun 2015 09:12:33 -0000 Subject: [Pytest-commit] commit/tox: hpk42: add changelog to fix issue259: passenv is now a line-list which allows to intersperse Message-ID: <20150619091233.14151.82828@app14.ash-private.bitbucket.org> 1 new commit in tox: https://bitbucket.org/hpk42/tox/commits/a04c5bcd5ae6/ Changeset: a04c5bcd5ae6 User: hpk42 Date: 2015-06-19 09:12:26+00:00 Summary: add changelog to fix issue259: passenv is now a line-list which allows to intersperse comments. Thanks stefano-m. Affected #: 1 file diff -r 37c6dac2b8484bb084dd6fff81fde108e5b96a4c -r a04c5bcd5ae6eebc6c386a77b175a3df99699f20 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -8,6 +8,9 @@ hackily implemented (if people want home-directory isolation we should figure out a better way to do it, possibly through a plugin) +- fix issue259: passenv is now a line-list which allows to intersperse + comments. Thanks stefano-m. + 2.0.2 ---------- Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. 
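The flattening step in the patch above can be sketched in isolation. This is a standalone illustration, not tox's actual API — `flatten_passenv` is a name invented here; the real work happens inside the `passenv` postprocess function in `tox/config.py`:

```python
import itertools

def flatten_passenv(lines):
    # Each line of the ini "line-list" may itself hold several
    # space-separated glob patterns; flatten them into one list,
    # mirroring the itertools.chain.from_iterable call in the patch.
    return list(itertools.chain.from_iterable(x.split(' ') for x in lines))

# A line-list as ini parsing would deliver it (comments already stripped):
print(flatten_passenv(["A123* B?23", "TMPDIR"]))  # ['A123*', 'B?23', 'TMPDIR']
```

This is why the attribute type could change from "space-separated-list" to "line-list" without breaking existing space-separated configs: each line is still split on spaces afterwards.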
You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Fri Jun 19 11:13:40 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 19 Jun 2015 09:13:40 -0000 Subject: [Pytest-commit] commit/tox: 2 new changesets Message-ID: <20150619091340.12729.17333@app13.ash-private.bitbucket.org> 2 new commits in tox: https://bitbucket.org/hpk42/tox/commits/7562481b0c4b/ Changeset: 7562481b0c4b User: acaron Date: 2015-06-05 03:37:44+00:00 Summary: Adds support for multiline envlist setting. Affected #: 2 files diff -r 93f262ca702e8d2e1192875ded9eaa584de3aa00 -r 7562481b0c4bb28fa100ba65b05202f4fa10f1ca tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -1273,6 +1273,17 @@ assert config.envlist == \ ["py26-dep1", "py26-dep2", "py27-dep1", "py27-dep2"] + def test_envlist_multiline(self, newconfig): + inisource = """ + [tox] + envlist = + py27 + py34 + """ + config = newconfig([], inisource) + assert config.envlist == \ + ["py27", "py34"] + def test_minversion(self, tmpdir, newconfig, monkeypatch): inisource = """ [tox] diff -r 93f262ca702e8d2e1192875ded9eaa584de3aa00 -r 7562481b0c4bb28fa100ba65b05202f4fa10f1ca tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -731,6 +731,8 @@ def _split_env(env): """if handed a list, action="append" was used for -e """ if not isinstance(env, list): + if '\n' in env: + env = ','.join(env.split('\n')) env = [env] return mapcat(_expand_envstr, env) https://bitbucket.org/hpk42/tox/commits/b4480d153f61/ Changeset: b4480d153f61 User: hpk42 Date: 2015-06-19 09:13:38+00:00 Summary: Merged in acaron/tox (pull request #163) Adds support for multiline envlist setting. 
Affected #: 2 files diff -r a04c5bcd5ae6eebc6c386a77b175a3df99699f20 -r b4480d153f612b6773a4bdac829b211277faf0dc tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -1323,6 +1323,17 @@ assert config.envlist == \ ["py26-dep1", "py26-dep2", "py27-dep1", "py27-dep2"] + def test_envlist_multiline(self, newconfig): + inisource = """ + [tox] + envlist = + py27 + py34 + """ + config = newconfig([], inisource) + assert config.envlist == \ + ["py27", "py34"] + def test_minversion(self, tmpdir, newconfig, monkeypatch): inisource = """ [tox] diff -r a04c5bcd5ae6eebc6c386a77b175a3df99699f20 -r b4480d153f612b6773a4bdac829b211277faf0dc tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -733,6 +733,8 @@ def _split_env(env): """if handed a list, action="append" was used for -e """ if not isinstance(env, list): + if '\n' in env: + env = ','.join(env.split('\n')) env = [env] return mapcat(_expand_envstr, env) Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email.
From commits-noreply at bitbucket.org Fri Jun 19 11:21:51 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 19 Jun 2015 09:21:51 -0000 Subject: [Pytest-commit] commit/tox: hpk42: allow envlist to be a multi-line list, to intersperse comments Message-ID: <20150619092151.5277.35252@app05.ash-private.bitbucket.org> 1 new commit in tox: https://bitbucket.org/hpk42/tox/commits/6d2c16ca3415/ Changeset: 6d2c16ca3415 User: hpk42 Date: 2015-06-19 09:21:44+00:00 Summary: allow envlist to be a multi-line list, to intersperse comments and have long envlist settings split more naturally. Thanks Andre Caron.
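The `_split_env` change shown in the diff can be illustrated on its own. This sketch stops before tox's brace-expansion step (the `mapcat(_expand_envstr, env)` call the real function applies afterwards); `normalize_envlist` is an invented name for the illustration:

```python
def normalize_envlist(env):
    # Sketch of the added normalization: a multi-line envlist string is
    # rewritten into the comma-separated form tox already understands.
    # (Illustrative stand-in for the start of tox.config._split_env.)
    if not isinstance(env, list):
        if '\n' in env:
            env = ','.join(env.split('\n'))
        env = [env]
    return env

print(normalize_envlist("py27\npy34"))  # ['py27,py34']
```

A list input (as produced by repeated `-e` options with `action="append"`) passes through untouched, which is why the newline handling only runs in the string branch.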
Affected #: 1 file diff -r b4480d153f612b6773a4bdac829b211277faf0dc -r 6d2c16ca341567fc5976a22834c9908777033f93 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -11,6 +11,9 @@ - fix issue259: passenv is now a line-list which allows to intersperse comments. Thanks stefano-m. +- allow envlist to be a multi-line list, to intersperse comments + and have long envlist settings split more naturally. Thanks Andre Caron. + 2.0.2 ---------- Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Fri Jun 19 11:37:23 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 19 Jun 2015 09:37:23 -0000 Subject: [Pytest-commit] commit/tox: hpk42: introduce a TOX_TESTENV_PASSENV setting which is honored Message-ID: <20150619093723.18004.86858@app08.ash-private.bitbucket.org> 1 new commit in tox: https://bitbucket.org/hpk42/tox/commits/9416611f7bf9/ Changeset: 9416611f7bf9 User: hpk42 Date: 2015-06-19 09:37:08+00:00 Summary: introduce a TOX_TESTENV_PASSENV setting which is honored when constructing the set of environment variables for test environments. Affected #: 6 files diff -r 6d2c16ca341567fc5976a22834c9908777033f93 -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -14,6 +14,10 @@ - allow envlist to be a multi-line list, to intersperse comments and have long envlist settings split more naturally. Thanks Andre Caron. +- introduce a TOX_TESTENV_PASSENV setting which is honored + when constructing the set of environment variables for test environments. + Thanks Marc Abramowitz for pushing in this direction. 
+ 2.0.2 ---------- diff -r 6d2c16ca341567fc5976a22834c9908777033f93 -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 setup.py --- a/setup.py +++ b/setup.py @@ -48,7 +48,7 @@ description='virtualenv-based automation of test activities', long_description=open("README.rst").read(), url='http://tox.testrun.org/', - version='2.1.0.dev1', + version='2.1.0.dev2', license='http://opensource.org/licenses/MIT', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], author='holger krekel', diff -r 6d2c16ca341567fc5976a22834c9908777033f93 -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 tests/test_config.py --- a/tests/test_config.py +++ b/tests/test_config.py @@ -789,6 +789,18 @@ assert "CB21" not in config.envconfigs["x2"].passenv assert "BX23" not in config.envconfigs["x2"].passenv + def test_passenv_from_global_env(self, tmpdir, newconfig, monkeypatch): + monkeypatch.setenv("A1", "a1") + monkeypatch.setenv("A2", "a2") + monkeypatch.setenv("TOX_TESTENV_PASSENV", "A1") + config = newconfig(""" + [testenv] + passenv = A2 + """) + env = config.envconfigs["python"] + assert "A1" in env.passenv + assert "A2" in env.passenv + def test_changedir_override(self, tmpdir, newconfig): config = newconfig(""" [testenv] diff -r 6d2c16ca341567fc5976a22834c9908777033f93 -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 tox/__init__.py --- a/tox/__init__.py +++ b/tox/__init__.py @@ -1,5 +1,5 @@ # -__version__ = '2.1.0.dev1' +__version__ = '2.1.0.dev2' from .hookspecs import hookspec, hookimpl # noqa diff -r 6d2c16ca341567fc5976a22834c9908777033f93 -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 tox/config.py --- a/tox/config.py +++ b/tox/config.py @@ -295,9 +295,9 @@ dest="recreate", help="force recreation of virtual environments") parser.add_argument("--result-json", action="store", - dest="resultjson", metavar="PATH", - help="write a json file with detailed information about " - "all commands and results involved.") + dest="resultjson", metavar="PATH", + help="write a json file with detailed information " + 
"about all commands and results involved.") # We choose 1 to 4294967295 because it is the range of PYTHONHASHSEED. parser.add_argument("--hashseed", action="store", @@ -388,6 +388,11 @@ passenv = set(["PATH", "PIP_INDEX_URL", "LANG"]) + # read in global passenv settings + p = os.environ.get("TOX_TESTENV_PASSENV", None) + if p is not None: + passenv.update(x for x in p.split() if x) + # we ensure that tmp directory settings are passed on # we could also set it to the per-venv "envtmpdir" # but this leads to very long paths when run with jenkins diff -r 6d2c16ca341567fc5976a22834c9908777033f93 -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 tox/session.py --- a/tox/session.py +++ b/tox/session.py @@ -46,6 +46,12 @@ tw = py.io.TerminalWriter() tw.write(config._parser._format_help()) tw.line() + tw.line("Environment variables", bold=True) + tw.line("TOXENV: comma separated list of environments " + "(overridable by '-e')") + tw.line("TOX_TESTENV_PASSENV: space-separated list of extra " + "environment variables to be passed into test command " + "environemnts") def show_help_ini(config): Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Fri Jun 19 16:08:51 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 19 Jun 2015 14:08:51 -0000 Subject: [Pytest-commit] commit/tox: 2 new changesets Message-ID: <20150619140851.5970.35127@app04.ash-private.bitbucket.org> 2 new commits in tox: https://bitbucket.org/hpk42/tox/commits/25c76c46dbdd/ Changeset: 25c76c46dbdd User: hpk42 Date: 2015-06-19 13:29:21+00:00 Summary: prepare 2.1.0 release Affected #: 4 files diff -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 -r 25c76c46dbdd91126d2c6696cc1b4e97db6588f6 doc/conf.py --- a/doc/conf.py +++ b/doc/conf.py @@ -48,8 +48,8 @@ # built documents. # # The short X.Y version. 
-release = "2.0" -version = "2.0.1" +release = "2.1" +version = "2.1.0" # The full version, including alpha/beta/rc tags. # The language for content autogenerated by Sphinx. Refer to documentation diff -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 -r 25c76c46dbdd91126d2c6696cc1b4e97db6588f6 doc/config.txt --- a/doc/config.txt +++ b/doc/config.txt @@ -206,6 +206,10 @@ ``TMPDIR`` will be passed down. You can override these variables with the ``setenv`` option. + If defined the ``TOX_TESTENV_PASSENV`` environment variable (in the tox + invocation environment) can define additional space-separated variable + names that are to be passed down to the test command environment. + .. confval:: recreate=True|False(default) Always recreate virtual environment if this option is True. diff -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 -r 25c76c46dbdd91126d2c6696cc1b4e97db6588f6 setup.py --- a/setup.py +++ b/setup.py @@ -48,7 +48,7 @@ description='virtualenv-based automation of test activities', long_description=open("README.rst").read(), url='http://tox.testrun.org/', - version='2.1.0.dev2', + version='2.1.0', license='http://opensource.org/licenses/MIT', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], author='holger krekel', diff -r 9416611f7bf93c4c85bb48ed2f5f4fe4f3151f97 -r 25c76c46dbdd91126d2c6696cc1b4e97db6588f6 tox/__init__.py --- a/tox/__init__.py +++ b/tox/__init__.py @@ -1,5 +1,5 @@ # -__version__ = '2.1.0.dev2' +__version__ = '2.1.0' from .hookspecs import hookspec, hookimpl # noqa https://bitbucket.org/hpk42/tox/commits/fb8ff963c77a/ Changeset: fb8ff963c77a User: hpk42 Date: 2015-06-19 14:08:41+00:00 Summary: Added tag 2.1.0 for changeset 25c76c46dbdd Affected #: 1 file diff -r 25c76c46dbdd91126d2c6696cc1b4e97db6588f6 -r fb8ff963c77a5e9189559597cf45cca2a88c03a0 .hgtags --- a/.hgtags +++ b/.hgtags @@ -26,3 +26,4 @@ b7e498efd0ecd543a870431ea8d34f2882d5ace8 2.0.0 2897c9e3a019ee29948cbeda319ffac0e6902053 2.0.1 82561ff2cbf48d8bf5be1384f5f3bd04c805fd30 2.0.2 
+25c76c46dbdd91126d2c6696cc1b4e97db6588f6 2.1.0 Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From commits-noreply at bitbucket.org Tue Jun 23 13:50:35 2015 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 23 Jun 2015 11:50:35 -0000 Subject: [Pytest-commit] commit/tox: 2 new changesets Message-ID: <20150623115035.29636.368@app04.ash-private.bitbucket.org> 2 new commits in tox: https://bitbucket.org/hpk42/tox/commits/09fd6a94e281/ Changeset: 09fd6a94e281 User: hpk42 Date: 2015-06-23 11:49:40+00:00 Summary: some fixes for detox, preparing 2.1.1 Affected #: 7 files diff -r fb8ff963c77a5e9189559597cf45cca2a88c03a0 -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,3 +1,10 @@ +2.1.1 +---------- + +- fix platform skipping for detox + +- report skipped platforms as skips in the summary + 2.1.0 ---------- diff -r fb8ff963c77a5e9189559597cf45cca2a88c03a0 -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 setup.py --- a/setup.py +++ b/setup.py @@ -48,7 +48,7 @@ description='virtualenv-based automation of test activities', long_description=open("README.rst").read(), url='http://tox.testrun.org/', - version='2.1.0', + version='2.1.1', license='http://opensource.org/licenses/MIT', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], author='holger krekel', diff -r fb8ff963c77a5e9189559597cf45cca2a88c03a0 -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 tests/test_venv.py --- a/tests/test_venv.py +++ b/tests/test_venv.py @@ -473,31 +473,24 @@ class TestVenvTest: - def test_patchPATH(self, newmocksession, monkeypatch): + def test_envbinddir_path(self, newmocksession, monkeypatch): monkeypatch.setenv("PIP_RESPECT_VIRTUALENV", "1") mocksession = newmocksession([], """ [testenv:python] commands=abc """) venv = mocksession.getenv("python") - envconfig = 
venv.envconfig monkeypatch.setenv("PATH", "xyz") - oldpath = venv.patchPATH() - assert oldpath == "xyz" - res = os.environ['PATH'] - assert res == "%s%sxyz" % (envconfig.envbindir, os.pathsep) - p = "xyz" + os.pathsep + str(envconfig.envbindir) - monkeypatch.setenv("PATH", p) - venv.patchPATH() - res = os.environ['PATH'] - assert res == "%s%s%s" % (envconfig.envbindir, os.pathsep, p) + l = [] + monkeypatch.setattr("py.path.local.sysfind", classmethod( + lambda *args, **kwargs: l.append(kwargs) or 0 / 0)) - assert envconfig.commands - monkeypatch.setattr(venv, '_pcall', lambda *args, **kwargs: 0 / 0) py.test.raises(ZeroDivisionError, "venv._install(list('123'))") + assert l.pop()["paths"] == [venv.envconfig.envbindir] py.test.raises(ZeroDivisionError, "venv.test()") + assert l.pop()["paths"] == [venv.envconfig.envbindir] py.test.raises(ZeroDivisionError, "venv.run_install_command(['qwe'])") - py.test.raises(ZeroDivisionError, "venv._pcall([1,2,3])") + assert l.pop()["paths"] == [venv.envconfig.envbindir] monkeypatch.setenv("PIP_RESPECT_VIRTUALENV", "1") monkeypatch.setenv("PIP_REQUIRE_VIRTUALENV", "1") monkeypatch.setenv("__PYVENV_LAUNCHER__", "1") diff -r fb8ff963c77a5e9189559597cf45cca2a88c03a0 -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 tests/test_z_cmdline.py --- a/tests/test_z_cmdline.py +++ b/tests/test_z_cmdline.py @@ -263,12 +263,9 @@ }) result = cmd.run("tox") assert not result.ret - assert "platform mismatch" not in result.stdout.str() - result = cmd.run("tox", "-v") - assert not result.ret - result.stdout.fnmatch_lines([ - "*python*platform mismatch*" - ]) + result.stdout.fnmatch_lines(""" + SKIPPED*platform mismatch* + """) def test_skip_unknown_interpreter(cmd, initproj): diff -r fb8ff963c77a5e9189559597cf45cca2a88c03a0 -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 tox/__init__.py --- a/tox/__init__.py +++ b/tox/__init__.py @@ -1,5 +1,5 @@ # -__version__ = '2.1.0' +__version__ = '2.1.1' from .hookspecs import hookspec, hookimpl # noqa diff -r 
fb8ff963c77a5e9189559597cf45cca2a88c03a0 -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 tox/session.py --- a/tox/session.py +++ b/tox/session.py @@ -421,6 +421,9 @@ path.ensure(dir=1) def setupenv(self, venv): + if not venv.matching_platform(): + venv.status = "platform mismatch" + return # we simply omit non-matching platforms action = self.newaction(venv, "getenv", venv.envconfig.envdir) with action: venv.status = 0 @@ -518,9 +521,6 @@ if self.config.option.sdistonly: return for venv in self.venvlist: - if not venv.matching_platform(): - venv.status = "platform mismatch" - continue # we simply omit non-matching platforms if self.setupenv(venv): if venv.envconfig.usedevelop: self.developpkg(venv, self.config.setupdir) @@ -569,7 +569,7 @@ self.report.error(msg) elif status == "platform mismatch": msg = " %s: %s" % (venv.envconfig.envname, str(status)) - self.report.verbosity1(msg) + self.report.skip(msg) elif status and status != "skipped tests": msg = " %s: %s" % (venv.envconfig.envname, str(status)) self.report.error(msg) diff -r fb8ff963c77a5e9189559597cf45cca2a88c03a0 -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 tox/venv.py --- a/tox/venv.py +++ b/tox/venv.py @@ -366,21 +366,13 @@ os.environ.pop(name, None) cwd.ensure(dir=1) - old = self.patchPATH() - try: - args[0] = self.getcommandpath(args[0], venv, cwd) - env = self._getenv(testcommand=testcommand) - return action.popen(args, cwd=cwd, env=env, - redirect=redirect, ignore_ret=ignore_ret) - finally: - os.environ['PATH'] = old - - def patchPATH(self): - oldPATH = os.environ['PATH'] + args[0] = self.getcommandpath(args[0], venv, cwd) + env = self._getenv(testcommand=testcommand) bindir = str(self.envconfig.envbindir) - os.environ['PATH'] = os.pathsep.join([bindir, oldPATH]) - self.session.report.verbosity2("setting PATH=%s" % os.environ["PATH"]) - return oldPATH + env['PATH'] = p = os.pathsep.join([bindir, os.environ["PATH"]]) + self.session.report.verbosity2("setting PATH=%s" % p) + return action.popen(args, 
cwd=cwd, env=env, + redirect=redirect, ignore_ret=ignore_ret) def getdigest(path): https://bitbucket.org/hpk42/tox/commits/87a9def32696/ Changeset: 87a9def32696 User: hpk42 Date: 2015-06-23 11:49:45+00:00 Summary: Added tag 2.1.1 for changeset 09fd6a94e281 Affected #: 1 file diff -r 09fd6a94e2812f6cdfddfc6979be9b481af802a3 -r 87a9def32696f7ada3b536621f723018ce9667ad .hgtags --- a/.hgtags +++ b/.hgtags @@ -27,3 +27,4 @@ 2897c9e3a019ee29948cbeda319ffac0e6902053 2.0.1 82561ff2cbf48d8bf5be1384f5f3bd04c805fd30 2.0.2 25c76c46dbdd91126d2c6696cc1b4e97db6588f6 2.1.0 +09fd6a94e2812f6cdfddfc6979be9b481af802a3 2.1.1 Repository URL: https://bitbucket.org/hpk42/tox/ -- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email. From issues-reply at bitbucket.org Tue Jun 23 20:47:48 2015 From: issues-reply at bitbucket.org (Florian Bruhin) Date: Tue, 23 Jun 2015 18:47:48 -0000 Subject: [Pytest-commit] Issue #260: Fatal Python error when running 32bit Python via tox on Windows (hpk42/tox) Message-ID: <20150623184748.25366.33495@app04.ash-private.bitbucket.org> New issue 260: Fatal Python error when running 32bit Python via tox on Windows https://bitbucket.org/hpk42/tox/issue/260/fatal-python-error-when-running-32bit Florian Bruhin: With this tox.ini: ```ini [testenv] commands = {envpython} -c 'print("Hello World")' basepython = C:\Python34_x32\python.exe ``` and a minimal setup.py: ```python from setuptools import setup setup() ``` I get this when running it on Windows: ``` C:\Users\florian\proj\qutebrowser\test>tox -e py34 GLOB sdist-make: C:\Users\florian\proj\qutebrowser\test\setup.py py34 create: C:\Users\florian\proj\qutebrowser\test\.tox\py34 ERROR: invocation failed (exit code 3), logfile: C:\Users\florian\proj\qutebrowser\test\.tox\py34\log\py34-0.log ERROR: actionid: py34 msg: getenv cmdargs: ['C:\\Python34\\python.exe', '-m', 'virtualenv', '--python', 
'C:\\Python34_x32\\python.exe', 'py34'] env: {'USERNAME': 'florian', 'SSHSESSIONID': '1003', 'PROGRAMFILES': 'C:\\Program Files', 'SYSTEMROOT': 'C:\\Windows', 'USERDOMAIN': 'qemu-win8', 'COMPUTERNAME': 'QEMU-WIN8', 'PROCESSOR_REVISION': '1a03', 'PYTHON': 'C:\\Python34', 'PUBLIC': 'C:\\Users\\Public', 'PROMPT': '$P$G', 'PSMODULEPATH': 'C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\Modules\\', 'VIM_EXE_DIR': 'C:\\Program Files (x86)\\Vim\\vim74', 'LOGONSERVER': '\\\\QEMU-WIN8', 'SSH_CLIENT': '192.168.122.1 39354 22', 'PROGRAMW6432': 'C:\\Program Files', 'TEMP': 'C:\\Users\\florian\\AppData\\Local\\Temp', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'ALLUSERSPROFILE': 'C:\\ProgramData', 'SSHWINGROUP': 'EVERYONE', 'PROGRAMFILES(X86)': 'C:\\Program Files (x86)', 'SSH_CONNECTION': '192.168.122.1 39354 192.168.122.85 22', 'PROCESSOR_LEVEL': '6', 'PROCESSOR_ARCHITECTURE': 'AMD64', 'SYSTEMDRIVE': 'C:', 'PROCESSOR_IDENTIFIER': 'Intel64 Family 6 Model 26 Stepping 3, GenuineIntel', 'TMP': 'C:\\Users\\florian\\AppData\\Local\\Temp', 'HOMEDRIVE': 'C:', 'OS': 'Windows_NT', 'HOME': 'C:\\Users\\florian', 'LOCALAPPDATA': 'C:\\Users\\florian\\AppData\\Local', 'HOMEPATH': '\\Users\\florian', 'COMMONPROGRAMW6432': 'C:\\Program Files\\Common Files', 'VIRTUAL_ENV': 'C:\\Users\\florian\\proj\\qutebrowser\\test\\.tox\\py34', 'SSHWINUSERDOMAIN': 'qemu-win8', 'NUMBER_OF_PROCESSORS': '1', 'USERPROFILE': 'C:\\Users\\florian', 'PROGRAMDATA': 'C:\\ProgramData', 'WINDIR': 'C:\\Windows', 'PATH': 'C:\\Users\\florian\\proj\\qutebrowser\\test\\.tox\\py34\\Scripts;C:\\Python34\\Lib\\site-packages\\PyQt5;C:\\Python34_x32\\Lib\\site-packages\\PyQt5;C:\\asciidoc;C:\\Python34\\;C:\\Python34\\Scripts;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files (x86)\\Git\\cmd;C:\\asciidoc-8.6.9', 'APPDATA': 'C:\\Users\\florian\\AppData\\Roaming', 'WINSSHDGROUP': 'EVERYONE', 'COMMONPROGRAMFILES(X86)': 'C:\\Program Files (x86)\\Common Files', 'SSHWINUSER': 'florian', 'COMSPEC': 'C:\\Windows\\system32\\cmd.exe', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.PY', 'FP_NO_HOST_CHECK': 'NO', 'PYTHONHASHSEED': '883', 'USERDOMAIN_ROAMINGPROFILE': 'qemu-win8'}
Fatal Python error: Failed to initialize Windows random API (CryptoGen) Running virtualenv with interpreter C:\Python34_x32\python.exe ERROR: InvocationError: C:\Python34\python.exe -m virtualenv --python C:\Python34_x32\python.exe py34 (see C:\Users\florian\proj\qutebrowser\test\.tox\py34\log\py34-0.log) ``` Searching for the error [it seems](http://bugs.python.org/issue20614) this happens when `SYSTEMROOT` is not set. Even though `SYSTEMROOT` is printed in the environment above, I tried to add `passenv = *` and `setenv = SYSTEMROOT=C:\Windows` without any success. When I run that Python install by hand, it works fine. When I use a 64bit install instead of the 32bit one on the same machine, it's fine as well. I also tried downgrading to 1.9.2 and it happens there as well, so it doesn't seem to be a recent `passenv` change. From issues-reply at bitbucket.org Mon Jun 29 14:29:05 2015 From: issues-reply at bitbucket.org (Jan Vlčinský) Date: Mon, 29 Jun 2015 12:29:05 -0000 Subject: [Pytest-commit] Issue #261: `skip_install=true` in fails on building sdist (hpk42/tox) Message-ID: <20150629122905.12382.35157@app12.ash-private.bitbucket.org> New issue 261: `skip_install=true` in fails on building sdist https://bitbucket.org/hpk42/tox/issue/261/skip_install-true-in-fails-on-building Jan Vlčinský: Issue #11 was resolved by adding `skip_install` to the `[testenv]` section. With the following `tox.ini`: ``` #!ini [tox] envlist=py27,py34 [testenv] skip_install=true dep = boto ``` I expected two virtualenvs, `py27` and `py34`, to be created without trying to install from `setup.py`, as I have none in my project. However, it fails at the attempt to build sdist.
``` $ tox Traceback (most recent call last): File "/usr/bin/tox", line 11, in <module> sys.exit(cmdline()) File "/home/javl/.virtualenvs/bin-tox/local/lib/python2.7/site-packages/tox/session.py", line 39, in main retcode = Session(config).runcommand() File "/home/javl/.virtualenvs/bin-tox/local/lib/python2.7/site-packages/tox/session.py", line 373, in runcommand return self.subcommand_test() File "/home/javl/.virtualenvs/bin-tox/local/lib/python2.7/site-packages/tox/session.py", line 518, in subcommand_test path = self.get_installpkg_path() File "/home/javl/.virtualenvs/bin-tox/local/lib/python2.7/site-packages/tox/session.py", line 494, in get_installpkg_path path = self._makesdist() File "/home/javl/.virtualenvs/bin-tox/local/lib/python2.7/site-packages/tox/session.py", line 388, in _makesdist raise tox.exception.MissingFile(setup) tox.MissingFile: MissingFile: /home/javl/.tox/nosetup/setup.py ``` I know this could be resolved by adding `skipsdist=true` to the `[tox]` section, but I would assume that `skip_install=true` also means there should be no attempt to create an sdist package. ``` $ python --version Python 2.7.9 $ tox --version 2.1.1 imported from /home/javl/.virtualenvs/bin-tox/local/lib/python2.7/site-packages/tox/__init__.pyc $ lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.1 (jessie) Release: 8.1 Codename: jessie ``` From issues-reply at bitbucket.org Mon Jun 29 21:52:58 2015 From: issues-reply at bitbucket.org (Jan Vlčinský) Date: Mon, 29 Jun 2015 19:52:58 -0000 Subject: [Pytest-commit] Issue #262: tox `-a` option to activate virtualenv only (hpk42/tox) Message-ID: <20150629195258.25400.81080@app01.ash-private.bitbucket.org> New issue 262: tox `-a` option to activate virtualenv only https://bitbucket.org/hpk42/tox/issue/262/tox-a-option-to-activate-virtualenv-only Jan Vlčinský: Apart from running tests in virtualenvs, `tox` seems to be a very good tool for creating virtualenvs.
This is what I understood from http://tox.readthedocs.org/en/latest/example/devenv.html I find this feature very handy, but now I very often find myself typing: $ source .tox/py27/bin/activate to activate one of the created envs. Proposal: add an option `--activate` or `-a` to `tox` that just activates one of the created virtualenvs without running tests, as follows: $ tox -a py27 (py27) $ Currently I am using a function which does something like that for me: $ function a() { source $1/bin/activate ; } $ a .tox/py27 (py27) $ which has the nice advantage that tab completion on virtualenv directories helps me hit even very complex names. Anyway, this requires some local system customization and is not an option on MS Windows (I guess someone could write a script for MS Windows, but that is an extra hurdle). I like the `virtualenvwrapper` tool, which can make using virtualenvs much simpler. An alternative idea is a new virtualenvwrapper-provided command, `workontox`, working the same way `workon` does today but using the virtualenvs defined by tox. An alternative name for such a command would be `atox`. To make my proposal clear: this issue proposes adding the `-a` option to `tox`.
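A rough stand-in for the proposed `-a` behaviour can be scripted outside tox today: spawn a subshell whose PATH starts with the chosen environment's bin directory, much the same way tox itself prepends `envbindir` to PATH when running test commands. Everything below is illustrative — the function names are invented, this is not part of tox, and on Windows the bin directory is `Scripts` rather than `bin`:

```python
import os
import subprocess
import sys

def build_env(envname, toxdir=".tox", base=None):
    # Environment for the "activated" subshell: prepend the env's bin
    # dir to PATH and set VIRTUAL_ENV, much like bin/activate does.
    base = dict(os.environ if base is None else base)
    envdir = os.path.abspath(os.path.join(toxdir, envname))
    base["PATH"] = os.pathsep.join([os.path.join(envdir, "bin"),
                                    base.get("PATH", "")])
    base["VIRTUAL_ENV"] = envdir
    return base

def activate(envname, toxdir=".tox"):
    # Spawn an interactive shell; exiting it "deactivates" the env.
    subprocess.call([os.environ.get("SHELL", "/bin/sh")],
                    env=build_env(envname, toxdir))

if __name__ == "__main__" and len(sys.argv) > 1:
    activate(sys.argv[1])
```

Saved as, say, `toxactivate.py`, running `python toxactivate.py py27` from the project root drops you into a shell with that env's interpreter first on PATH; exiting the shell restores the previous environment, which sidesteps the `source`-only limitation the function-based workaround has.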