From issues-reply at bitbucket.org  Sat Feb 1 09:11:08 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sat, 01 Feb 2014 08:11:08 -0000
Subject: [Pytest-commit] Issue #438: entering pdb from pytest's internal test suite breaks pytest later (hpk42/pytest)
Message-ID: <20140201081108.29587.2304@app11.ash-private.bitbucket.org>

New issue 438: entering pdb from pytest's internal test suite breaks pytest later
https://bitbucket.org/hpk42/pytest/issue/438/entering-pdb-from-pytests-internal-test

Jurko Gospodnetić:

Consider the following test case prepared to be run as a part of pytest's internal test suite:

```
class TestJurko:
    def test_entering_pdb_breaks_pytest(self, testdir):
        testdir.makepyfile("""\
def test_me():
    out = "one\ntwo\nthree"
""")
        import pdb
        #pdb.set_trace()
        testdir.runpytest()
```

If you run it *as is*, the test will pass. But if you:

1. uncomment the ```pdb.set_trace()```
2. run the test
3. use the ```c``` command in its ```pdb``` debugger prompt to continue with the program

pytest will break with something similar to:

```
=================================== ERRORS ====================================
_____________ ERROR collecting test_entering_pdb_breaks_pytest.py _____________
C:\Program Files\Python\Python333\lib\site-packages\_pytest\python.py:451: in _importtestmodule
>       mod = self.fspath.pyimport(ensuresyspath=True)
C:\Program Files\Python\Python333\lib\site-packages\py\_path\local.py:620: in pyimport
>       __import__(modname)
E     File "c:\users\jurko\appdata\local\temp\pytest-4447\testdir\test_entering_pdb_breaks_pytest0\test_entering_pdb_breaks_pytest.py", line 2
E       out = "one
E                ^
E   SyntaxError: EOL while scanning string literal
=========================== 1 error in 0.06 seconds ===========================
```

Hope this helps.

Best regards,
Jurko Gospodnetić


From issues-reply at bitbucket.org  Sat Feb 1 12:27:39 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sat, 01 Feb 2014 11:27:39 -0000
Subject: [Pytest-commit] Issue #439: capsys fixture does not collect the same test output as reported by pytest (hpk42/pytest)
Message-ID: <20140201112739.12159.74705@app08.ash-private.bitbucket.org>

New issue 439: capsys fixture does not collect the same test output as reported by pytest
https://bitbucket.org/hpk42/pytest/issue/439/capsys-fixture-does-not-collect-the-same

Jurko Gospodnetić:

I noticed that test fixture output gets reported by pytest as part of the output captured for a particular test. However, when you use the capsys fixture to access all the output collected for that test - test fixture output is not included and instead still gets reported as captured output at the end of the test.

You can use the following test to illustrate the problem. Just run it as a part of the internal pytest test suite.

```
import fnmatch

def test_consistent_reported_and_capsys_test_output(testdir):
    """
    capsys test fixture should allow collecting complete test output.

    Running a test with the capsys fixture should allow the test to
    collect the exact same output as reported by pytest when running a
    matching test without using capsys.

    Illustrates a defect in pytest 2.5.0 where capsys does not allow
    accessing fixture output but still reports that output as part of
    the test's captured output.
""" testdir.makepyfile(r"""\ import pytest @pytest.fixture() def ola(request): print(" in the fixture") def do_the_unga_bunga(capsys=None): print(" in the test") if capsys: out, err = capsys.readouterr() for line in out[:-1].split("\n"): print(" %s" % (line[9:],)) pytest.fail() def test_direct(ola): do_the_unga_bunga() def test_with_capsys(ola, capsys): do_the_unga_bunga(capsys) """) result = testdir.runpytest("--tb=short", "-k", "test_direct") output = result.stdout.lines direct_lines = fnmatch.filter(output, "*") capsys_lines = fnmatch.filter(output, "*") assert len(direct_lines) == 2 assert len(capsys_lines) == 0 output_direct = [x[9:] for x in direct_lines] result = testdir.runpytest("--tb=short", "-k", "test_with_capsys") output = result.stdout.lines direct_lines = fnmatch.filter(output, "*") capsys_lines = fnmatch.filter(output, "*") assert len(direct_lines) + len(capsys_lines) == 2 output_capsys = [x[9:] for x in capsys_lines] assert output_direct == output_capsys ``` The test can be instrumented with more precise assertions that would fail earlier, but as it stands, it illustrates that exactly the point I'm trying to make here - that the directly captured output and the output captured via capsys do not match. Hope this helps. Best regards, Jurko Gospodneti? From issues-reply at bitbucket.org Sat Feb 1 14:18:54 2014 From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=) Date: Sat, 01 Feb 2014 13:18:54 -0000 Subject: [Pytest-commit] Issue #440: parametrized fixture output captured inconsistently (hpk42/pytest) Message-ID: <20140201131854.21823.54631@app18.ash-private.bitbucket.org> New issue 440: parametrized fixture output captured inconsistently https://bitbucket.org/hpk42/pytest/issue/440/parametrized-fixture-output-captured Jurko Gospodneti?: When using parametrized module scoped fixtures, their finalization output gets captured inconsistently. It does not get captured for a test run with the initial parametrization, but tests run using a non-initial parametrization capture output from the previous parametrization's finalization instead. The following test demonstrates the issue. You can run is as a part of the internal pytest test suite: ``` import fnmatch def test_module_fixture_finalizer_output_capture(testdir): """ Parametrized module scoped fixture output should be captured consistently and separately for each test using that fixture. If the fixture code produces output, that output should be consistently captured for every test using any of that fixture's parametrizations - either it should or it should not be captured for every such test, but it must not be captured only for some of them. Also, if a fixture produces output for a specific fixture parametrization, that output must not be captured for tests using a different fixture parametrization. Demonstrates a defect in pytest 2.5.0 where module scoped parametrized fixtures do not get their finalization output captured for their initial parametrization, but each test run using a non-initial parametrization captures finalization output from the previous parametrization. 
""" testdir.makepyfile(r"""\ import pytest @pytest.fixture(scope="module", params=["A", "B", "C"]) def ola(request): print(" %s - in the fixture" % (request.param,)) class frufru: def __init__(self, param): self.param = param def __call__(self): print(" %s - in the finalizer" % (self.param,)) request.addfinalizer(frufru(request.param)) return request.param def test_me(ola): print(" %s - in the test" % (ola,)) pytest.fail() """) expected_params = "ABC" result = testdir.runpytest("--tb=short", "-q") output = result.stdout.get_lines_after("*=== FAILURES ===*") # Collect reported captured output lines for each test. in_output_block = False test_outputs = [] for line in output: if in_output_block: if line.startswith(" "): test_outputs[-1].append(line[7:]) # Check expected output line formatting. assert line[7] in expected_params assert line[8:].startswith(" - ") else: in_output_block = False elif fnmatch.fnmatch(line, "*--- Captured stdout ---*"): in_output_block = True test_outputs.append([]) else: # Sanity check - no lines except reported output lines should match # our expected output line formatting. assert not line.startswith("") # We ran a single test for each fixture parametrization. assert len(test_outputs) == len(expected_params) content_0 = None for test_param_index, single_test_output in enumerate(test_outputs): # All lines belonging to a single test should report using the same # fixture parameter. param = single_test_output[0][0] for line_index, line in enumerate(single_test_output): assert line[0] == param # All tests should output the same content except for the param value. content = [line[1:] for line in single_test_output] if content_0 is None: content_0 = content else: assert content == content_0 ``` The test could be made shorter and use more precise assertions but I did not want for it to assert the exact logged output, but only that the output be consistent for tests run using all the different parametrizations. Hope this helps. Best regards, Jurko Gospodneti? From issues-reply at bitbucket.org Sat Feb 1 18:36:15 2014 From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=) Date: Sat, 01 Feb 2014 17:36:15 -0000 Subject: [Pytest-commit] Issue #441: parametrized fixture + module scope + failing test + '-x' option = invalid fixture finalizer output (hpk42/pytest) Message-ID: <20140201173615.25436.39587@app05.ash-private.bitbucket.org> New issue 441: parametrized fixture + module scope + failing test + '-x' option = invalid fixture finalizer output https://bitbucket.org/hpk42/pytest/issue/441/parametrized-fixture-module-scope-failing Jurko Gospodneti?: The following test demonstrates the issue. You can run it as a part of the internal pytest test suite. ``` import fnmatch def test_module_fixture_finalizer_output_capture(testdir): """ Demonstrates a fixture finalizer output capture defect in pytest 2.5.0. pytest should not allow any test script output to be displayed uncontrolled unless its output capture has been disabled. If we have a parametrized module scoped fixture and a failing test using that fixture then running the test suite with the '-x' option seems to produce uncaptured fixture finalizer output (first fixture parametrization only). 
""" testdir.makepyfile(r"""\ import pytest @pytest.fixture(scope="module", params=["A", "B"]) def ola(request): print(" fixture (%s)" % (request.param,)) class frufru: def __init__(self, param): self.param = param def __call__(self): print(" finalizer <%s>" % (self.param,)) request.addfinalizer(frufru(request.param)) return request.param def test_me(ola): print(" test <%s>" % (ola,)) pytest.fail() """) # Using '--tb=no' should prevent all regularly captured test output to be # displayed. Using '-q' simply removes some irrelevant test output thus # making this external test's failure output shorter. for extra_params in ([], ["-x"]): output = testdir.runpytest("--tb=no", "-q", *extra_params).stdout.lines for line_index, line in enumerate(output): assert "" not in line ``` Hope this helps. Best regards, Jurko Gospodneti? From issues-reply at bitbucket.org Sat Feb 1 22:35:24 2014 From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=) Date: Sat, 01 Feb 2014 21:35:24 -0000 Subject: [Pytest-commit] Issue #442: Make reported captured output include function scope fixture finalizer output. (hpk42/pytest) Message-ID: <20140201213524.5476.69585@app03.ash-private.bitbucket.org> New issue 442: Make reported captured output include function scope fixture finalizer output. https://bitbucket.org/hpk42/pytest/issue/442/make-reported-captured-output-include Jurko Gospodneti?: My suggestion is to make regular test output capture include output from function scoped fixture finalizers. The same might not be desirable for fixtures with larger scopes as their finalizers are not run for every test using them, and they are not included in this proposal. Here's a scenario that should help you understand why I think this function level fixture behaviour is desirable: Imagine you are a new ptytest user not deeply familiar with pytest specific APIs. And you want your tests to output additional calculated information in case of failure (e.g. some test related output collected from an external process). Outputting that information to stdout will work fine, since pytest will actually display that output only for failing tests. The only remaining question is when to output that information. If it is only a few tests, this output can simply be added in a finally clause wrapping the whole test code, but in case this is needed in a lot of tests, one tries to think of a better solution. The first thing that pops to mind is to implement a fixture that would output this information in its teardown. You check the pytest documentation for 'teardown' and easily find that you implement this with pytest using a function scoped fixture with teardown code contained as its finalizer routine. Now you add the finalizer, add your code and everything should work fine... but alas... it is not. You run your failing test and no output appears. Example: ``` import pytest @pytest.fixture def passwd(request): print("setup before yield") def finalizer(): print("teardown after yield") request.addfinalizer(finalizer) return "gaga" def test_has_lines(passwd): print("test called (%s)" % (passwd,)) pytest.fail() ``` And you're pretty much out of ideas other than the ugly solution to manually wrap all your tests in a try:/finally: as suggested before. You take another stab at the docs, and run into an experimental yield based fixture feature so you decide to try that. You rework your code, but in the end it suffers from the same problem - the teartdown output simply does not get captured. 
Example:

```
import pytest

@pytest.yield_fixture
def passwd():
    print("setup before yield")
    yield "gaga"
    print("teardown after yield")

def test_has_lines(passwd):
    print("test called (%s)" % (passwd,))
    pytest.fail()
```

Now you are completely out of options unless you dive into pytest source code or ask someone much more experienced with pytest who can point you in the direction of undocumented pytest report hooks or some other such expose-pytest-bowels solution. Even if you find out how to do this you will have wasted a lot more time on it than you'd like. Adding another pytest specific solution could work, but a new user would still expect the approach above to work, and nothing in his way would warn him that this was a dead end.

My suggestion is to just let pytest consistently capture function scope fixture finalizer output (yield based or not). That should allow intuitive pytest usage using standard test framework patterns without having to search for or learn additional pytest specific functionality. If this is added, additional pytest specific functionality could be used (if documented and made public) to improve the solution or make its results even nicer, but pytest should still allow the user to quickly solve his problem in a way consistent with typical testing framework usage.

Hope this helps.

Best regards,
Jurko Gospodnetić


From issues-reply at bitbucket.org  Sun Feb 2 11:29:50 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Alex_Gr=C3=B6nholm?=)
Date: Sun, 02 Feb 2014 10:29:50 -0000
Subject: [Pytest-commit] Issue #443: Misleading documentation on the "skipped/xfail" page (hpk42/pytest)
Message-ID: <20140202102950.28606.86921@app13.ash-private.bitbucket.org>

New issue 443: Misleading documentation on the "skipped/xfail" page
https://bitbucket.org/hpk42/pytest/issue/443/misleading-documentation-on-the-skipped

Alex Grönholm:

The test in the example code is incorrect:

```
#!python
@pytest.mark.skipif(sys.version_info >= (3,3), reason="requires python3.3")
```

The test should obviously be sys.version_info < (3, 3) if at least Python 3.3 is required.

The test in the next example is also incorrect, for the same reason:

```
#!python
minversion = pytest.mark.skipif(mymodule.__versioninfo__ >= (1,1), reason="at least mymodule-1.1 required")
```


From issues-reply at bitbucket.org  Mon Feb 3 15:59:16 2014
From: issues-reply at bitbucket.org (jbaiter)
Date: Mon, 03 Feb 2014 14:59:16 -0000
Subject: [Pytest-commit] Issue #444: tmpdir: Use pathlib instead of py.path (hpk42/pytest)
Message-ID: <20140203145916.28490.88986@app08.ash-private.bitbucket.org>

New issue 444: tmpdir: Use pathlib instead of py.path
https://bitbucket.org/hpk42/pytest/issue/444/tmpdir-use-pathlib-instead-of-pypath

jbaiter:

Since Python 3.4 ships with the pathlib module by default and a backport for >=2.7 is available, wouldn't it make sense to use it instead of py.path?


From issues-reply at bitbucket.org  Tue Feb 4 14:55:09 2014
From: issues-reply at bitbucket.org (Thomas Winwood)
Date: Tue, 04 Feb 2014 13:55:09 -0000
Subject: [Pytest-commit] Issue #445: Use test function docstrings to provide nicer names for tests (hpk42/pytest)
Message-ID: <20140204135509.11389.15252@app03.ash-private.bitbucket.org>

New issue 445: Use test function docstrings to provide nicer names for tests
https://bitbucket.org/hpk42/pytest/issue/445/use-test-function-docstrings-to-provide

Thomas Winwood:

I wrote some tests...
```
#!python
def test_empty():
    tokens = lexer.lex("")
    with pytest.raises(StopIteration):
        assert next(tokens)

def test_space():
    tokens = lexer.lex(" ")
    assert next(tokens) == Token("SPACE", " ")

def test_tab():
    tokens = lexer.lex("\t")
    assert next(tokens) == Token("TAB", "\t")

def test_newline():
    tokens = lexer.lex("\n")
    assert next(tokens) == Token("NEWLINE", "\n")
```

...and it occurred to me that I could provide docstrings for these functions so the error output when a test fails looks a little friendlier and in some cases is more descriptive than the function name (which tends to be a little terse by necessity). Is this possible, or is there some functionality which already uses the docstring of test functions for something else?


From issues-reply at bitbucket.org  Tue Feb 4 17:12:42 2014
From: issues-reply at bitbucket.org (leo-the-manic)
Date: Tue, 04 Feb 2014 16:12:42 -0000
Subject: [Pytest-commit] Issue #446: Alternative syntax: allow no-argument fixture functions to be used as decorators (hpk42/pytest)
Message-ID: <20140204161242.31663.76066@app02.ash-private.bitbucket.org>

New issue 446: Alternative syntax: allow no-argument fixture functions to be used as decorators
https://bitbucket.org/hpk42/pytest/issue/446/alternative-syntax-allow-no-argument

leo-the-manic:

Hi pytest team! I just started using this library a week ago and it is fantastic!! I love the detailed test output and the elegant looking test code. Sorry to gush.

Anyway, I am using fixtures I don't need direct access to, in exactly the way explained by the docs here: http://pytest.org/latest/fixture.html#using-fixtures-from-classes-modules-or-projects

My issue is this: the fixture function itself needs a ``@pytest.fixture`` decorator in order to enter the pytest fixture machinery; then, to be used outside of a funcarg, it must be named in a ``@pytest.mark.usefixtures()`` decorator. I find the ``@pytest.mark.usefixtures`` a bit noisy, and for some reason something about it simply rubs me the wrong way.

My suggestion is to transform functions decorated with ``@pytest.fixture`` into decorators themselves so they can mark the functions/classes where they're used. In other words, turn:

```
#!python
@pytest.mark.usefixtures("cleandir")
def test_cwd_starts_empty(self):
```

into

```
#!python
@cleandir
def test_cwd_starts_empty(self):
```

I am not aware of the technical complexity involved in implementing a syntax like this. But if it's possible I think it would be a nice, concise syntax that would be much more in line with the funcargs injection style, in terms of brevity. On top of that it would avoid the "stringly-typed" nature of the usefixtures function. I am aware that other aspects (e.g. funcargs) are also stringly typed, but I still think that letting most linters check the correctness of the decorator calls would be at least a minor gain.

Welp, that's all! Thanks for your time =]


From issues-reply at bitbucket.org  Thu Feb 6 09:27:17 2014
From: issues-reply at bitbucket.org (Paul Oswald)
Date: Thu, 06 Feb 2014 08:27:17 -0000
Subject: [Pytest-commit] Issue #447: Fixture params not accessible inside the fixture, not getting called multiple times. (hpk42/pytest)
Message-ID: <20140206082717.5058.26791@app06.ash-private.bitbucket.org>

New issue 447: Fixture params not accessible inside the fixture, not getting called multiple times.
https://bitbucket.org/hpk42/pytest/issue/447/fixture-params-not-accessible-inside-the

Paul Oswald:

The parametrize decorator will not pass parameters into methods that are on subclasses of TestCase.
There is a note at the bottom of http://pytest.org/latest/unittest.html#autouse-fixtures-and-accessing-other-fixtures explaining that this behaviour is intentional. I fully understand the logic behind that, but I have quite a few tests that are already written in a way such that the methods accept (self, data) args by using a nose plugin. I need a function that works exactly like the parametrize decorator but will call TestCase methods.

In an attempt to work around this issue, I created the following proof of concept, but it seems to have some issues:

```
#!python
from django.test import TestCase
import pytest

data_list = [
    {"1": 1},
    {"2": 2},
]

@pytest.fixture(scope='function', params=data_list)
def my_data(request):
    print request.param

@pytest.mark.usefixtures("my_data")
class TestParameterizedTest(TestCase):
    def test_passing(self):
        assert 1 == 1

    def test_multiple(self):
        print self.my_data
        assert 1 == 2
```

When this runs, request.param is not defined. Also, request.param [is mentioned](http://pytest.org/latest/fixture.html#parametrizing-a-fixture) in the docs but [not really listed](http://pytest.org/latest/builtin.html#_pytest.python.FixtureRequest) in the request api.

```
request =

    @pytest.fixture(scope='function', params=data_list)
    def my_data(request):
>       print request.param
E       AttributeError: SubRequest instance has no attribute 'param'
```

Also, even if you don't try to access request.param, I expected this to parametrize into 2 tests and it doesn't seem to recognise it as being parameterized.


From issues-reply at bitbucket.org  Thu Feb 6 19:20:19 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Sebastian_Pawlu=C5=9B?=)
Date: Thu, 06 Feb 2014 18:20:19 -0000
Subject: [Pytest-commit] Issue #448: pytest-xdist: AttributeError: 'module' object has no attribute '__all__' (hpk42/pytest)
Message-ID: <20140206182019.9418.49407@app01.ash-private.bitbucket.org>

New issue 448: pytest-xdist: AttributeError: 'module' object has no attribute '__all__'
https://bitbucket.org/hpk42/pytest/issue/448/pytest-xdist-attributeerror-module-object

Sebastian Pawluś:

```
#!python
~/buildbot/slave/hippy_1/Linux64/build$ ../../../virtualenv/bin/py.test testing/ -n 5
platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2
plugins: xdist
gw0 C / gw1 C / gw2 C / gw3 C / gw4 C[gw1] node down: Traceback (most recent call last):
  File "/home/buildbot/buildbot/slave/virtualenv/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1029, in executetask
    do_exec(co, loc) # noqa
  File "", line 1, in do_exec
  File "", line 139, in
  File "", line 117, in remote_initconfig
  File "/home/buildbot/buildbot/slave/virtualenv/local/lib/python2.7/site-packages/_pytest/config.py", line 646, in fromdictargs
    pluginmanager = get_plugin_manager()
  File "/home/buildbot/buildbot/slave/virtualenv/local/lib/python2.7/site-packages/_pytest/config.py", line 46, in get_plugin_manager
    pluginmanager.import_plugin(spec)
  File "/home/buildbot/buildbot/slave/virtualenv/local/lib/python2.7/site-packages/_pytest/core.py", line 230, in import_plugin
    self.register(mod, modname)
  File "/home/buildbot/buildbot/slave/virtualenv/local/lib/python2.7/site-packages/_pytest/core.py", line 99, in register
    reg(plugin, name)
  File "/home/buildbot/buildbot/slave/virtualenv/local/lib/python2.7/site-packages/_pytest/config.py", line 600, in _register_plugin
    setns(pytest, dic)
  File "/home/buildbot/buildbot/slave/virtualenv/local/lib/python2.7/site-packages/_pytest/config.py", line 859, in setns
    obj.__all__.append(name)
AttributeError: 'module' object has no
attribute '__all__'
```

other things installed along with pytest and pytest-xdist:

```
#!bash
:~/buildbot/slave/hippy_1/Linux64/build$ ../../../virtualenv/bin/pip freeze
Twisted==13.2.0
argparse==1.2.1
buildbot-slave==0.8.8
execnet==1.2.0
prettytable==0.7.2
py==1.4.20
pyflakes==0.7.3
pytest==2.5.2
pytest-xdist==1.10
rply==0.7.2
wsgiref==0.1.2
zope.interface==4.1.0
```


From issues-reply at bitbucket.org  Sat Feb 8 22:28:42 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sat, 08 Feb 2014 21:28:42 -0000
Subject: [Pytest-commit] Issue #450: pytest --fixtures console output should not include text formatted as rst source (hpk42/pytest)
Message-ID: <20140208212842.19829.92106@app02.ash-private.bitbucket.org>

New issue 450: pytest --fixtures console output should not include text formatted as rst source
https://bitbucket.org/hpk42/pytest/issue/450/pytest-fixtures-console-output-should-not

Jurko Gospodnetić:

When running pytest --fixtures, console output includes text formatted as *rst* source such as:

```
#!text
capsys
    enables capturing of writes to sys.stdout/sys.stderr and makes
    captured output available via ``capsys.readouterr()`` method calls
    which return a ``(out, err)`` tuple.
```

or:

```
#!text
tmpdir
    return a temporary directory path object which is unique to each test
    function invocation, created as a sub directory of the base temporary
    directory. The returned object is a `py.path.local`_ path object.
```

Such *rst* markup should be removed. I noticed this as something 'really out of place' in the pytest PDF documentation, in the '2.1.5 Builtin fixtures/function arguments' section, where this output is quoted verbatim.


From issues-reply at bitbucket.org  Sun Feb 9 13:51:07 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sun, 09 Feb 2014 12:51:07 -0000
Subject: [Pytest-commit] Issue #455: Using xdist with -f option runs tests twice on a detected test source code change (hpk42/pytest)
Message-ID: <20140209125107.15646.24547@app13.ash-private.bitbucket.org>

New issue 455: Using xdist with -f option runs tests twice on a detected test source code change
https://bitbucket.org/hpk42/pytest/issue/455/using-xdist-with-f-option-runs-tests-twice

Jurko Gospodnetić:

When using the xdist plugin with its ```-f``` option to wait for source changes and automatically rerun the tests when they are detected, the tests seem to be rerun twice for each test code change.

Here's what happens:

1. you change a test file, e.g. ```test_me.py```
2. xdist detects the change and reruns the tests
3. while rerunning the tests, pytest compiles the test file and updates ```test_me.pyc```
4. xdist detects the ```test_me.pyc``` change and reruns the tests again

Seems like a typical usage scenario which should be handled by xdist. It should know that it was the one who created the ```test_me.pyc``` file and used that same file when running the previous test run.

Hope this helps.

Best regards,
Jurko Gospodnetić
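A possible workaround (an aside, not part of the original report): until xdist learns to ignore the byte-code files its own test runs produce, one way to avoid the double rerun is to keep Python from writing ```.pyc``` files at all, so the change detector only ever sees the ```.py``` sources. A minimal sketch, assuming a project-level ```conftest.py```:

```
#!python
# conftest.py - hypothetical workaround sketch
# Keep CPython from writing .pyc files so xdist's -f loop is only ever
# triggered by changes to the .py sources themselves.
import sys

sys.dont_write_bytecode = True
```

The same effect can be achieved by exporting ```PYTHONDONTWRITEBYTECODE=1``` in the environment before starting py.test.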
From issues-reply at bitbucket.org  Sat Feb 8 11:06:14 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sat, 08 Feb 2014 10:06:14 -0000
Subject: [Pytest-commit] Issue #449: better indicate that there were xpassed test results in a test run (hpk42/pytest)
Message-ID: <20140208100614.27365.72559@app01.ash-private.bitbucket.org>

New issue 449: better indicate that there were xpassed test results in a test run
https://bitbucket.org/hpk42/pytest/issue/449/better-indicate-that-there-were-xpassed

Jurko Gospodnetić:

This is a usability related enhancement suggestion.

When running tests what I really want to know first is:

* Did anything unexpected happen?

which includes both:

* Were there any test failures?
* Did any tests marked ```xfail``` suddenly start passing?

The first I can get easily at first glance - either by running the tests with the ```-x``` option or by checking the final summary line color (green/red). The latter is a problem though, because in order to find out whether there were any ```xpassed``` tests I have to concentrate a lot harder and either:

* Scan the test summary for ```xpassed``` test result indicators (capital letter ```X```), which can be difficult to discern from expected ```xfailed``` (lowercase letter ```x```) results.
* Read the final colored summary line to see if any ```xpassed``` test results occurred.
* Run the tests with the '-r X' option and read whether the summary displays any ```xpassed``` test results.

One of my projects has a lot of tests marked ```xfail``` and it started to bug me that I often waste a lot of time and interrupt my flow & concentration by having to check the test results in detail just to see if there were any ```xpassed``` test results.

My suggestion would be to:

* Use a different color (e.g. blue?) if the summary line would otherwise be colored green but at least one ```xpassed``` test result was encountered.
* Use an exit code other than a simple 0=success in such cases.
* Possibly color individual ```failed``` (```F```) & ```xpassed``` (```X```) test result indicators red or red/blue.

You might not want to use the new ```xpassed``` result related coloring when running with disabled assertions (```-O```) and a warning was displayed about this possibly causing failing tests to be marked as passed, e.g. when running using an older Python interpreter version. The whole enhancement could possibly be made configurable as well.

Hope this helps.

Best regards,
Jurko Gospodnetić


From issues-reply at bitbucket.org  Mon Feb 10 01:39:22 2014
From: issues-reply at bitbucket.org (brettatoms)
Date: Mon, 10 Feb 2014 00:39:22 -0000
Subject: [Pytest-commit] Issue #456: Test with fixtures that share a parent fixture not getting called (hpk42/pytest)
Message-ID: <20140210003922.23048.3472@app05.ash-private.bitbucket.org>

New issue 456: Test with fixtures that share a parent fixture not getting called
https://bitbucket.org/hpk42/pytest/issue/456/test-with-fixtures-that-share-a-parent

brettatoms:

I'm not sure how to really describe this, but here's a code sample that demonstrates the issue. Basically, if I create a test that takes two fixtures as parameters and one of those fixtures depends on the other fixture, then the fixture only gets called once and the test method has the same instance for both fixtures.
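The code sample referenced above did not survive in this message. A minimal sketch of the setup being described (all names hypothetical) might look like this:

```
#!python
import pytest

class Conn(object):
    pass

class Session(object):
    def __init__(self, conn):
        self.conn = conn

@pytest.fixture
def conn():
    return Conn()

@pytest.fixture
def session(conn):
    # this fixture depends on the 'conn' fixture
    return Session(conn)

def test_uses_both(conn, session):
    # per the report, both arguments unexpectedly end up being the very
    # same instance, i.e. an assertion like this one would fail
    assert session is not conn
```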
From issues-reply at bitbucket.org  Sat Feb 8 23:12:23 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sat, 08 Feb 2014 22:12:23 -0000
Subject: [Pytest-commit] Issue #451: Incorrectly formatted PDF documentation tables in chapter 7 (hpk42/pytest)
Message-ID: <20140208221223.16099.84633@app02.ash-private.bitbucket.org>

New issue 451: Incorrectly formatted PDF documentation tables in chapter 7
https://bitbucket.org/hpk42/pytest/issue/451/incorrectly-formatted-pdf-documentation

Jurko Gospodnetić:

Tables in [pytest PDF documentation chapter 7](https://pytest.org/latest/pytest.pdf#chapter.7) are badly formatted and extend beyond the right side of the page.


From issues-reply at bitbucket.org  Sun Feb 9 13:18:54 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sun, 09 Feb 2014 12:18:54 -0000
Subject: [Pytest-commit] Issue #454: Console output text color missing when pytest run using xdist's -f option (hpk42/pytest)
Message-ID: <20140209121854.17367.90897@app05.ash-private.bitbucket.org>

New issue 454: Console output text color missing when pytest run using xdist's -f option
https://bitbucket.org/hpk42/pytest/issue/454/console-output-text-color-missing-when

Jurko Gospodnetić:

When running a test suite using pytest and its xdist plugin without using xdist's ```-f``` command-line option, red/green console output text color gets displayed correctly. However, when running the same test suite using the same pytest command-line options plus the additional ```-f``` command-line option (to loop on failure), all output text gets displayed in regular gray color.

Environment:

```
#!text
Windows 7 x64 SP1
pytest 2.5.1
Python 2.7.6
regular Windows cmd.exe command prompt console
```

Hope this helps.

Best regards,
Jurko Gospodnetić


From issues-reply at bitbucket.org  Sat Feb 8 23:31:38 2014
From: issues-reply at bitbucket.org (=?utf-8?q?Jurko_Gospodneti=C4=87?=)
Date: Sat, 08 Feb 2014 22:31:38 -0000
Subject: [Pytest-commit] Issue #452: Incorrect capfd description/documentation (hpk42/pytest)
Message-ID: <20140208223138.6531.96477@app02.ash-private.bitbucket.org>

New issue 452: Incorrect capfd description/documentation
https://bitbucket.org/hpk42/pytest/issue/452/incorrect-capfd-description-documentation

Jurko Gospodnetić:

When you run ```py.test --fixtures``` you get the following output:

```
#!text
capfd
    enables capturing of writes to file descriptors 1 and 2 and makes
    captured output available via ``capsys.readouterr()`` method calls
    which return a ``(out, err)`` tuple.
```

which refers to ```capsys``` instead of ```capfd```.
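For readers comparing the two fixtures (an editorial sketch, not text from the report): the description should of course read ```capfd.readouterr()```. A minimal usage illustration of both, written here as an assumption rather than quoted from the docs:

```
#!python
import os

def test_with_capsys(capsys):
    print("via sys.stdout")
    out, err = capsys.readouterr()  # captures sys.stdout/sys.stderr writes
    assert "via sys.stdout" in out

def test_with_capfd(capfd):
    os.write(1, b"via fd 1\n")
    out, err = capfd.readouterr()   # captures writes to file descriptors 1 and 2
    assert "via fd 1" in out
```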
From issues-reply at bitbucket.org  Sun Feb 9 12:33:15 2014
From: issues-reply at bitbucket.org (Tim Sampson)
Date: Sun, 09 Feb 2014 11:33:15 -0000
Subject: [Pytest-commit] Issue #453: assertion rewriting fails with object whose __repr__ contains '{\n' (hpk42/pytest)
Message-ID: <20140209113315.22676.18927@app05.ash-private.bitbucket.org>

New issue 453: assertion rewriting fails with object whose __repr__ contains '{\n'
https://bitbucket.org/hpk42/pytest/issue/453/assertion-rewriting-fails-with-object

Tim Sampson:

Consider the following:

```
#!python
        resp = self.user.service.content.get_journal(initial_sync=True)
>       assert resp.count != 0

v2/test_journal.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

explanation = 'assert 0\n{0 = Headers: \nContent: {\n  "count": 0,\n  "items": [],\n  "total": 0,\n  "journal_max": 2158\n}.count\n} != 0'

    def format_explanation(explanation):
        """This formats an explanation

        Normally all embedded newlines are escaped, however there are
        three exceptions: \n{, \n} and \n~. The first two are intended
        cover nested explanations, see function and attribute explanations
        for examples (.visit_Call(), visit_Attribute()). The last one is
        for when one explanation needs to span multiple lines, e.g. when
        displaying diffs.
        """
        explanation = _collapse_false(explanation)
        lines = _split_explanation(explanation)
>       result = _format_lines(lines)

/usr/local/lib/python2.6/dist-packages/_pytest/assertion/util.py:31:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

lines = ['assert 0', '{0 = Headers: \\nContent: {\\n  "count": 0,\\n  "items": [],\\n  "total": 0,\\n  "journal_max": 2158', '}.count', '} != 0']

    def _format_lines(lines):
        """Format the individual lines

        This will replace the '{', '}' and '~' characters of our mini
        formatting language with the proper 'where ...', 'and ...' and
        ' + ...' text, taking care of indentation along the way.

        Return a list of formatted lines.
        """
        result = lines[:1]
        stack = [0]
        stackcnt = [0]
        for line in lines[1:]:
            if line.startswith('{'):
                if stackcnt[-1]:
                    s = u('and ')
                else:
                    s = u('where ')
                stack.append(len(result))
                stackcnt[-1] += 1
                stackcnt.append(0)
                result.append(u(' +') + u(' ')*(len(stack)-1) + s + line[1:])
            elif line.startswith('}'):
                assert line.startswith('}')
                stack.pop()
                stackcnt.pop()
>               result[stack[-1]] += line[1:]
E               IndexError: list index out of range

/usr/local/lib/python2.6/dist-packages/_pytest/assertion/util.py:108: IndexError
```

I'm not sure how this can work, as repr(resp) contains the pretty printed json body, i.e. it contains '{\n' and '\n}', which apparently confuses format_explanation. I could of course get around this by setting assert=plain and that works fine, but the bare asserts are so nice that I'd like to keep them. Also I guess I could change our code to not include the problematic chars in repr, but I would rather not do that either.
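A minimal self-contained reproduction of the reported behaviour might look like the following (hypothetical class, written here for illustration; the trigger is the '{' + newline and newline + '}' sequences in the object's repr):

```
#!python
class Resp(object):
    count = 0
    def __repr__(self):
        # pretty-printed JSON style repr containing '{\n' and '\n}'
        return 'Content: {\n  "count": 0,\n  "items": []\n}'

def test_repr_with_brace_newlines():
    resp = Resp()
    # with assertion rewriting active, formatting this failure's
    # explanation reportedly raises IndexError in _format_lines()
    assert resp.count != 0
```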
From issues-reply at bitbucket.org  Mon Feb 10 16:26:48 2014
From: issues-reply at bitbucket.org (Floris Bruynooghe)
Date: Mon, 10 Feb 2014 15:26:48 -0000
Subject: [Pytest-commit] Issue #457: parser.addoption(type="float") backwards incompatibility (hpk42/pytest)
Message-ID: <20140210152648.19666.85089@app05.ash-private.bitbucket.org>

New issue 457: parser.addoption(type="float") backwards incompatibility
https://bitbucket.org/hpk42/pytest/issue/457/parseraddoption-type-float-backwards

Floris Bruynooghe:

The parser.addoption(type="...") api has changed from optparse to argparse, and while the conversion from "int" -> int and "string" -> string is handled, "float" is not available. This causes existing plugins to fail without any notice.


From issues-reply at bitbucket.org  Mon Feb 10 18:51:42 2014
From: issues-reply at bitbucket.org (Hernan Grecco)
Date: Mon, 10 Feb 2014 17:51:42 -0000
Subject: [Pytest-commit] Issue #151: Specify bugfix Python version (hpk42/tox)
Message-ID: <20140210175142.14917.22419@app07.ash-private.bitbucket.org>

New issue 151: Specify bugfix Python version
https://bitbucket.org/hpk42/tox/issue/151/specify-bugfix-python-version

Hernan Grecco:

In my system, tox is using an obsolete Python 2.6 (I guess it is 2.6.1, the one installed in the system, which I cannot upgrade as I do not have root privileges). I need to test against Python 2.6.5+ but I cannot seem to specify this. Is there any way to specify the bugfix Python version to use?


From issues-reply at bitbucket.org  Mon Feb 10 23:19:05 2014
From: issues-reply at bitbucket.org (space one)
Date: Mon, 10 Feb 2014 22:19:05 -0000
Subject: [Pytest-commit] Issue #458: KeyError: 'COV_CORE_SOURCE' when clearing environment (hpk42/pytest)
Message-ID: <20140210221905.25382.47092@app09.ash-private.bitbucket.org>

New issue 458: KeyError: 'COV_CORE_SOURCE' when clearing environment
https://bitbucket.org/hpk42/pytest/issue/458/keyerror-cov_core_source-when-clearing

space one:

For security purposes I am clearing the environment in my application. This raises the following error in combination with coverage.

python2 -m pytest --cov-report html --cov test_broken.py test_broken.py

```
#!python
import os

def test_broken():
    os.environ.clear()
```

Of course the .clear() is not done in the test definition but in the code which is tested.
```
#!python
test_broken.py .Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/pytest.py", line 10, in
    raise SystemExit(pytest.main())
  File "/usr/lib/python2.7/site-packages/_pytest/config.py", line 19, in main
    exitstatus = config.hook.pytest_cmdline_main(config=config)
  File "/usr/lib/python2.7/site-packages/_pytest/core.py", line 368, in __call__
    return self._docall(methods, kwargs)
  File "/usr/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall
    res = mc.execute()
  File "/usr/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute
    res = method(**kwargs)
  File "/usr/lib/python2.7/site-packages/_pytest/main.py", line 111, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "/usr/lib/python2.7/site-packages/_pytest/main.py", line 104, in wrap_session
    exitstatus=session.exitstatus)
  File "/usr/lib/python2.7/site-packages/_pytest/core.py", line 368, in __call__
    return self._docall(methods, kwargs)
  File "/usr/lib/python2.7/site-packages/_pytest/core.py", line 379, in _docall
    res = mc.execute()
  File "/usr/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute
    res = method(**kwargs)
  File "/usr/lib/python2.7/site-packages/_pytest/terminal.py", line 325, in pytest_sessionfinish
    __multicall__.execute()
  File "/usr/lib/python2.7/site-packages/_pytest/core.py", line 297, in execute
    res = method(**kwargs)
  File "/usr/lib/python2.7/site-packages/pytest_cov.py", line 98, in pytest_sessionfinish
    self.cov_controller.finish()
  File "/usr/lib/python2.7/site-packages/cov_core.py", line 142, in finish
    self.unset_env()
  File "/usr/lib/python2.7/site-packages/cov_core.py", line 65, in unset_env
    del os.environ['COV_CORE_SOURCE']
  File "/usr/lib/python2.7/os.py", line 496, in __delitem__
    del self.data[key]
KeyError: 'COV_CORE_SOURCE'
```


From issues-reply at bitbucket.org  Tue Feb 11 07:49:14 2014
From: issues-reply at bitbucket.org (Jason R. Coombs)
Date: Tue, 11 Feb 2014 06:49:14 -0000
Subject: [Pytest-commit] Issue #459: doctest-modules + pyreadline means default color becomes black (hpk42/pytest)
Message-ID: <20140211064914.24050.7243@app10.ash-private.bitbucket.org>

New issue 459: doctest-modules + pyreadline means default color becomes black
https://bitbucket.org/hpk42/pytest/issue/459/doctest-modules-pyreadline-means-default

Jason R. Coombs:

Slightly related to #399, but also different.

![color.jpg](https://bitbucket.org/repo/Kd84B/images/3291019784-color.jpg)

Note that the default text color goes black even after the pytest process exits. Thus far, I've only been able to replicate the problem on Python 3.4.0b3. I have not been able to replicate the issue on Python 3.3. The problem does occur on a vanilla cmd.exe process as well as an ANSI terminal.

The problem only occurs if pyreadline is installed and if doctests are invoked. Remove pyreadline or eliminate or disable doctests, and the process completes without any color issues. The presence or absence of colorama has no effect (invoking "py -m pytest" allows the tests to run without requiring colorama to be installed).

I've tried disabling much of the doctest output-related functionality in _pytest.doctest, but that had no effect.
I've completely cleaned my environment so there's no funny business with other libraries:

```
>>> sys.path
['', 'c:\\python\\lib\\site-packages\\setuptools-2.2-py3.4.egg', 'c:\\users\\jaraco\\projects\\public\\pytest', 'c:\\python\\lib\\site-packages\\colorama-0.2.7-py3.4.egg', 'c:\\python\\lib\\site-packages\\py-1.4.20-py3.4.egg', 'c:\\python\\lib\\site-packages\\pyreadline-2.0-py3.4-win-amd64.egg', 'C:\\WINDOWS\\SYSTEM32\\python34.zip', 'c:\\python\\DLLs', 'c:\\python\\lib', 'c:\\python', 'C:\\Users\\jaraco\\AppData\\Roaming\\Python\\Python34\\site-packages', 'c:\\python\\lib\\site-packages']
```

So the problem appears to be associated with Python 3.4. Would someone please try replicating this issue in Python 3.4 in your Windows environment?


From issues-reply at bitbucket.org  Wed Feb 12 01:21:11 2014
From: issues-reply at bitbucket.org (Daniel Nephin)
Date: Wed, 12 Feb 2014 00:21:11 -0000
Subject: [Pytest-commit] Issue #152: tox does not export VIRTUAL_ENV to environment (hpk42/tox)
Message-ID: <20140212002111.24906.52161@app11.ash-private.bitbucket.org>

New issue 152: tox does not export VIRTUAL_ENV to environment
https://bitbucket.org/hpk42/tox/issue/152/tox-does-not-export-virtual_env-to

Daniel Nephin:

The standard virtualenv `bin/activate` script exports an environment variable `$VIRTUAL_ENV` when it starts. `tox` should do the same.


From issues-reply at bitbucket.org  Wed Feb 12 03:27:53 2014
From: issues-reply at bitbucket.org (Nikolaus Rath)
Date: Wed, 12 Feb 2014 02:27:53 -0000
Subject: [Pytest-commit] Issue #460: AttributeError: 'SubRequest' object has no attribute 'param' (hpk42/pytest)
Message-ID: <20140212022753.16262.91189@app13.ash-private.bitbucket.org>

New issue 460: AttributeError: 'SubRequest' object has no attribute 'param'
https://bitbucket.org/hpk42/pytest/issue/460/attributeerror-subrequest-object-has-no

Nikolaus Rath:

The attached meta-test-case results in

```
#!
    @pytest.fixture(params=(0,1,2))
    def param1(request):
>       return request.param
E       AttributeError: 'SubRequest' object has no attribute 'param'
```

with pytest from Mercurial tip.


From issues-reply at bitbucket.org  Wed Feb 12 04:16:19 2014
From: issues-reply at bitbucket.org (Nikolaus Rath)
Date: Wed, 12 Feb 2014 03:16:19 -0000
Subject: [Pytest-commit] Issue #461: Resources warning when combining capfd and skip() (hpk42/pytest)
Message-ID: <20140212031619.18501.31608@app02.ash-private.bitbucket.org>

New issue 461: Resources warning when combining capfd and skip()
https://bitbucket.org/hpk42/pytest/issue/461/resources-warning-when-combining-capfd-and

Nikolaus Rath:

The attached test script seems to trigger lots of ResourceWarnings:

```
#!
$ py.test-3 ~/tmp/test_bug.py
======================================== test session starts =========================================
platform linux -- Python 3.3.3 -- pytest-2.5.1
collected 4 items

../../tmp/test_bug.py sss.
================================ 1 passed, 3 skipped in 0.01 seconds =================================
Exception ResourceWarning: ResourceWarning("unclosed file <_io.TextIOWrapper name=12 mode='r+' encoding='UTF-8'>",) in <_io.FileIO name=12 mode='rb+'> ignored
Exception ResourceWarning: ResourceWarning("unclosed file <_io.TextIOWrapper name=10 mode='r+' encoding='UTF-8'>",) in <_io.FileIO name=10 mode='rb+'> ignored
Exception ResourceWarning: ResourceWarning("unclosed file <_io.TextIOWrapper name=17 mode='r+' encoding='UTF-8'>",) in <_io.FileIO name=17 mode='rb+'> ignored
Exception ResourceWarning: ResourceWarning("unclosed file <_io.TextIOWrapper name=15 mode='r+' encoding='UTF-8'>",) in <_io.FileIO name=15 mode='rb+'> ignored
Exception ResourceWarning: ResourceWarning("unclosed file <_io.TextIOWrapper name=22 mode='r+' encoding='UTF-8'>",) in <_io.FileIO name=22 mode='rb+'> ignored
Exception ResourceWarning: ResourceWarning("unclosed file <_io.TextIOWrapper name=20 mode='r+' encoding='UTF-8'>",) in <_io.FileIO name=20 mode='rb+'> ignored
```


From issues-reply at bitbucket.org  Wed Feb 12 09:46:49 2014
From: issues-reply at bitbucket.org (Charlie Clark)
Date: Wed, 12 Feb 2014 08:46:49 -0000
Subject: [Pytest-commit] Issue #462: What is the correct spelling when using unicode in assertions? (hpk42/pytest)
Message-ID: <20140212084649.22409.92678@app08.ash-private.bitbucket.org>

New issue 462: What is the correct spelling when using unicode in assertions?
https://bitbucket.org/hpk42/pytest/issue/462/what-is-the-correct-spelling-when-using

Charlie Clark:

I'm really struggling with a way to do this that will work with Python 2 and 3. I have a simple assertion in a file encoded as UTF-8:

    assert ws['A16'].value == '=IF(ISBLANK(B16), "Düsseldorf", B16)'

Unfortunately, this leads to an unholy failure because pytest seems to be running some internal conversion:

```
#!python
___________________________________________________________ test_read_complex_formulae ____________________________________________________________

    def test_read_complex_formulae():
        null_file = os.path.join(DATADIR, 'reader', 'formulae.xlsx')
        wb = load_workbook(null_file)
        ws = wb.get_active_sheet()
        # Test normal formulae
        assert ws.cell('A1').data_type != 'f'
        assert ws.cell('A2').data_type != 'f'
        assert ws.cell('A3').data_type == 'f'
        assert 'A3' not in ws.formula_attributes
        assert ws.cell('A3').value == '=12345'
        assert ws.cell('A4').data_type == 'f'
        assert 'A4' not in ws.formula_attributes
        assert ws.cell('A4').value == '=A2+A3'
        assert ws.cell('A5').data_type == 'f'
        assert 'A5' not in ws.formula_attributes
        assert ws.cell('A5').value == '=SUM(A2:A4)'
        # Test unicode
>       assert ws['A16'].value == '=IF(ISBLANK(B16), "Düsseldorf", B16)'

null_file  = '/Users/charlieclark/Projects/openpyxl/openpyxl/tests/test_data/reader/formulae.xlsx'
wb         =
ws         =

openpyxl/tests/test_read.py:261:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

ops = ('==',), results = (False,), expls = ('%(py3)s\n{%(py3)s = %(py1)s.value\n} == %(py6)s',)
each_obj = ('=IF(ISBLANK(B16), "Düsseldorf", B16)', '=IF(ISBLANK(B16), "D\xc3\xbcsseldorf", B16)')

    def _call_reprcompare(ops, results, expls, each_obj):
        for i, res, expl in zip(range(len(ops)), results, expls):
            try:
                done = not res
            except Exception:
                done = True
            if done:
                break
        if util._reprcompare is not None:
>           custom = util._reprcompare(ops[i], each_obj[i], each_obj[i + 1])

done       = True
each_obj   = ('=IF(ISBLANK(B16),
"D?sseldorf", B16)', '=IF(ISBLANK(B16), "D\xc3\xbcsseldorf", B16)') expl = '%(py3)s\n{%(py3)s = %(py1)s.value\n} == %(py6)s' expls = ('%(py3)s\n{%(py3)s = %(py1)s.value\n} == %(py6)s',) i = 0 ops = ('==',) res = False results = (False,) lib/python2.7/site-packages/_pytest/assertion/rewrite.py:343: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ op = '==', left = '=IF(ISBLANK(B16), "D?sseldorf", B16)', right = '=IF(ISBLANK(B16), "D\xc3\xbcsseldorf", B16)' def callbinrepr(op, left, right): hook_result = item.ihook.pytest_assertrepr_compare( config=item.config, op=op, left=left, right=right) for new_expl in hook_result: if new_expl: # Don't include pageloads of data unless we are very # verbose (-vv) > if (len(py.builtin._totext('').join(new_expl[1:])) > 80*8 and item.config.option.verbose < 2): E UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 22: ordinal not in range(128) hook_result = [['\'=IF(ISBLANK(...eldorf", B16)\' == \'=IF(ISBLANK(B...eldorf", B16)\'', '- =IF(ISBLANK(B16), "D?sseldorf", B16)', '? ^', '+ =IF(ISBLANK(B16), "D\xc3\xbcsseldorf", B16)', '? ^^']] item = left = '=IF(ISBLANK(B16), "D?sseldorf", B16)' new_expl = ['\'=IF(ISBLANK(...eldorf", B16)\' == \'=IF(ISBLANK(B...eldorf", B16)\'', '- =IF(ISBLANK(B16), "D?sseldorf", B16)', '? ^', '+ =IF(ISBLANK(B16), "D\xc3\xbcsseldorf", B16)', '? ^^'] op = '==' right = '=IF(ISBLANK(B16), "D\xc3\xbcsseldorf", B16)' lib/python2.7/site-packages/_pytest/assertion/__init__.py:83: UnicodeDecodeError ----------------------------------------------------------------- Captured stderr ----------------------------------------------------------------- /Users/charlieclark/Projects/openpyxl/openpyxl/tests/test_read.py:261: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal assert ws['A16'].value == '=IF(ISBLANK(B16), "D?sseldorf", B16)' /Users/charlieclark/Projects/openpyxl/lib/python2.7/site-packages/_pytest/assertion/util.py:178: UnicodeWarning: Unicode unequal comparison failed to convert both arguments to Unicode - interpreting them as being unequal if left[i] != right[i]: /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/difflib.py:439: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal a[besti+bestsize] == b[bestj+bestsize]: /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/difflib.py:978: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal if ai == bj: /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/difflib.py:435: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal a[besti-1] == b[bestj-1]: ``` I've seen similar issues reported but nothing telling me the correct way to write the test. 
From issues-reply at bitbucket.org  Fri Feb 14 11:22:58 2014
From: issues-reply at bitbucket.org (bgr_)
Date: Fri, 14 Feb 2014 10:22:58 -0000
Subject: [Pytest-commit] Issue #463: Add alias for parametrize or provide better error reporting (hpk42/pytest)
Message-ID: <20140214102258.13173.12881@app13.ash-private.bitbucket.org>

New issue 463: Add alias for parametrize or provide better error reporting
https://bitbucket.org/hpk42/pytest/issue/463/add-alias-for-parametrize-or-provide

bgr_:

Consider adding an alias paramet**e**rize for parametrize, since it's a valid spelling, but using it in pytest will give a misleading error message - for the code below the error message will say *"fixture 'arg' not found"*, not mentioning that paramet**e**rize doesn't exist.

```
#!python
import pytest

@pytest.mark.parameterize('arg, expected', [
    (1, 1),
])
def test_confusion(arg, expected):
    assert arg == expected
```


From issues-reply at bitbucket.org  Mon Feb 17 14:12:14 2014
From: issues-reply at bitbucket.org (vmalloc)
Date: Mon, 17 Feb 2014 13:12:14 -0000
Subject: [Pytest-commit] Issue #464: Allow mixtures of fixture + parameterization marks (hpk42/pytest)
Message-ID: <20140217131214.10419.60142@app02.ash-private.bitbucket.org>

New issue 464: Allow mixtures of fixture + parameterization marks
https://bitbucket.org/hpk42/pytest/issue/464/allow-mixtures-of-fixture-parameterization

vmalloc:

Given the following code:

```python
import pytest

@pytest.fixture
def arg1():
    return 1

@pytest.mark.parameterize("arg2", [2, 22])
def test(arg1, arg2):
    pass
```

py.test complains that arg2 isn't a known fixture. However, in this case it could have been deduced pretty easily, since there is an explicit parameterization of the *arg2* parameter. Being able to mix the two can be incredibly useful:

```python
@pytest.mark.parameterize("with_https", [True, False])
def test_webserver(webserver, with_https):
    ...
```

In the above example, webserver can be obtained from a fixture (thus benefitting from finalization and more), while with_https is a mere boolean flag, and would be overkill to turn into a full-blown fixture function.


From issues-reply at bitbucket.org  Tue Feb 18 01:09:10 2014
From: issues-reply at bitbucket.org (Lars Hupfeldt Nielsen)
Date: Tue, 18 Feb 2014 00:09:10 -0000
Subject: [Pytest-commit] Issue #465: running pytest.main(...) from python script breaks cov plugin (hpk42/pytest)
Message-ID: <20140218000910.27199.38506@app08.ash-private.bitbucket.org>

New issue 465: running pytest.main(...)
from python script breaks cov plugin
https://bitbucket.org/hpk42/pytest/issue/465/running-pytestmain-from-python-script

Lars Hupfeldt Nielsen:

The cov plugin works when executing py.test, showing 84% coverage; when calling pytest.main() only 64% coverage is shown.


From issues-reply at bitbucket.org  Thu Feb 20 18:09:46 2014
From: issues-reply at bitbucket.org (Steven R)
Date: Thu, 20 Feb 2014 17:09:46 -0000
Subject: [Pytest-commit] Issue #466: IOError when writing --junitxml report (hpk42/pytest)
Message-ID: <20140220170946.18251.94915@app03.ash-private.bitbucket.org>

New issue 466: IOError when writing --junitxml report
https://bitbucket.org/hpk42/pytest/issue/466/ioerror-when-writing-junitxml-report

Steven R:

**Version**: pytest-2.4.2-py2.7
**Related bug 1**: https://bitbucket.org/hpk42/pytest/issue/271/ioerror-when-writing-junitxml-report-when
**Related bug 2**: https://bitbucket.org/hpk42/pytest/issue/173/pytest-fails-when-writing-junit-style-test
**Frequency**: Intermittent 1/20
**Note**: This may be related to new file or folder structure on clean systems. I'm only seeing this on the initial runs of the test on new systems.

Traceback (most recent call last):
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\pytest.py", line 10, in
    raise SystemExit(pytest.main())
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\config.py", line 19, in main
    exitstatus = config.hook.pytest_cmdline_main(config=config)
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\core.py", line 368, in __call__
    return self._docall(methods, kwargs)
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\core.py", line 379, in _docall
    res = mc.execute()
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\core.py", line 297, in execute
    res = method(**kwargs)
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\main.py", line 111, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\main.py", line 104, in wrap_session
    exitstatus=session.exitstatus)
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\core.py", line 368, in __call__
    return self._docall(methods, kwargs)
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\core.py", line 379, in _docall
    res = mc.execute()
  File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\core.py", line 297, in execute
    res = method(**kwargs)
  File
"C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\terminal.py", line 325, in pytest_sessionfinish __multicall__.execute() File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\core.py", line 297, in execute res = method(**kwargs) File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\site-packages\pytest-2.4.2-py2.7.egg\_pytest\junitxml.py", line 209, in pytest_sessionfinish logfile = py.std.codecs.open(self.logfile, 'w', encoding='utf-8') File "C:\Jenkins\workspace\Test-Stress-ServerRegression\sw\tools\win32\python\275\lib\codecs.py", line 881, in open file = __builtin__.open(filename, mode, buffering) IOError: [Errno 2] No such file or directory: 'C:\\Jenkins\\workspace\\Test-Stress-ServerRegression\\Logs\\project_stress_test_results.xml' From issues-reply at bitbucket.org Fri Feb 21 16:44:03 2014 From: issues-reply at bitbucket.org (Floris Bruynooghe) Date: Fri, 21 Feb 2014 15:44:03 -0000 Subject: [Pytest-commit] Issue #467: Cache fixtures which raise pytest.skip.Exception and pytest.fail.Exception (hpk42/pytest) Message-ID: <20140221154403.25532.13604@app05.ash-private.bitbucket.org> New issue 467: Cache fixtures which raise pytest.skip.Exception and pytest.fail.Exception https://bitbucket.org/hpk42/pytest/issue/467/cache-fixtures-which-raise Floris Bruynooghe: Something I do quite a lot in fixtures is calling `pytest.skip(...)` or `pytest.fail(...)` in session-scoped fixtures, usually for a service like a database server which is not available. However even though the fixture is scoped on the session py.test will not cache the exception raised from the fixture so the fixture will be executed again and again trying to connect to the same server over and over. Currently I solve this by caching the skip result manually on the fixture so that the fixture code can skip early, but I think it would be nice if py.test cached the exceptions raised from the fixture. While caching a generic exception might not be suitable I'm proposing to at least consider this for pytest.skip.Exception and pytest.fail.Exception. From issues-reply at bitbucket.org Mon Feb 24 17:54:56 2014 From: issues-reply at bitbucket.org (m27315) Date: Mon, 24 Feb 2014 16:54:56 -0000 Subject: [Pytest-commit] Issue #468: guaranteed print to stdout and stderr (hpk42/pytest) Message-ID: <20140224165456.1076.41950@app02.ash-private.bitbucket.org> New issue 468: guaranteed print to stdout and stderr https://bitbucket.org/hpk42/pytest/issue/468/guaranteed-print-to-stdout-and-stderr m27315: Occasionally, it is desirable to print directly to stdout and stderr, regardless of '-s' switch invocation. For example, one may want to print an occasional debug message during testing without wading through the stdout and stderr for every test. Or, one may want to print a custom message, like coverage information at the end of all tests. Would it be possible to add 2 new pytest methods that always print to stdout and stderr bypassing all stdout and stderr capturing? This would greatly simplify test debug and custom report generation. From issues-reply at bitbucket.org Tue Feb 25 20:51:36 2014 From: issues-reply at bitbucket.org (kwpolska) Date: Tue, 25 Feb 2014 19:51:36 -0000 Subject: [Pytest-commit] Issue #470: Crashes when too many things fail? 
From issues-reply at bitbucket.org Mon Feb 24 17:54:56 2014
From: issues-reply at bitbucket.org (m27315)
Date: Mon, 24 Feb 2014 16:54:56 -0000
Subject: [Pytest-commit] Issue #468: guaranteed print to stdout and stderr (hpk42/pytest)
Message-ID: <20140224165456.1076.41950@app02.ash-private.bitbucket.org>

New issue 468: guaranteed print to stdout and stderr
https://bitbucket.org/hpk42/pytest/issue/468/guaranteed-print-to-stdout-and-stderr

m27315:

Occasionally it is desirable to print directly to stdout and stderr, regardless of whether the '-s' switch was given. For example, one may want to print the occasional debug message during testing without wading through the captured stdout and stderr of every test, or print a custom message, such as coverage information, at the end of all tests. Would it be possible to add two new pytest methods that always print to stdout and stderr, bypassing all capturing? This would greatly simplify test debugging and custom report generation.
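For the end-of-run half of this request there is a workaround today: the `pytest_terminal_summary` hook runs after per-test capturing has finished. For in-test debug prints, duplicating the original stdout file descriptor is a common trick, but whether it escapes pytest's fd-level capture depends on when the duplicate is made, so the second half of this sketch is an assumption to verify rather than a guaranteed mechanism:

```python
# conftest.py
import os

# Duplicate fd 1 at conftest import time; if capturing has not been
# installed yet at that point, writes to this fd reach the real terminal.
_real_stdout = os.dup(1)

def debug_print(msg):
    # Bypasses Python-level stdout objects entirely.
    os.write(_real_stdout, (msg + "\n").encode("utf-8"))

def pytest_terminal_summary(terminalreporter):
    # Runs at the end of the session, outside per-test capturing.
    terminalreporter.write_line("custom summary (e.g. coverage info) goes here")
```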
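For the manual caching described in issue #467 above, a rough sketch looks like this; `connect_to_database` is a hypothetical stand-in, and the module-level cache is only one possible shape for "caching the skip result on the fixture", not the reporter's actual code:

```python
import pytest

_skip_reason = []  # remembers a failed attempt for the rest of the session

@pytest.fixture(scope="session")
def database(request):
    if _skip_reason:
        # A previous attempt already failed: skip immediately instead of
        # retrying the connection for every test that uses the fixture.
        pytest.skip(_skip_reason[0])
    try:
        conn = connect_to_database()  # hypothetical helper
    except Exception as exc:
        _skip_reason.append("database server not available: %s" % exc)
        pytest.skip(_skip_reason[0])
    request.addfinalizer(conn.close)
    return conn
```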
""" if encoding is not None: if 'U' in mode: # No automatic conversion of '\n' is done on reading and writing mode = mode.strip().replace('U', '') if mode[:1] not in set('rwa'): mode = 'r' + mode if 'b' not in mode: # Force opening of the file in binary mode mode = mode + 'b' > file = __builtin__.open(filename, mode, buffering) E IOError: [Errno 2] No such file or directory: u'/tmp/tmpk9BlUY/target/output/rss.xml' ../../virtualenvs/nikola-py2/lib/python2.7/codecs.py:881: IOError ?snip? =========================== 8 failed in 0.51 seconds =========================== ``` What might be causing this? From issues-reply at bitbucket.org Tue Feb 25 14:19:15 2014 From: issues-reply at bitbucket.org (Andrei Baidarov) Date: Tue, 25 Feb 2014 13:19:15 -0000 Subject: [Pytest-commit] Issue #469: junitxml parses report.nodeid incorrectly (hpk42/pytest) Message-ID: <20140225131915.17722.14553@app10.ash-private.bitbucket.org> New issue 469: junitxml parses report.nodeid incorrectly https://bitbucket.org/hpk42/pytest/issue/469/junitxml-parses-reportnodeid-incorrectly Andrei Baidarov: If test param contains '::' junitxml parses report.nodeid incorrectly. For example, for test ``` #!python import pytest @pytest.mark.parametrize('foo', ['1::2']) def testA(foo): assert True ``` py.test --junitxml=1 test.py produces xml with classname="test.testA[1" and name="2]" instead of classname="test" and name="testA[1::2]". From issues-reply at bitbucket.org Thu Feb 27 00:00:31 2014 From: issues-reply at bitbucket.org (klmitch) Date: Wed, 26 Feb 2014 23:00:31 -0000 Subject: [Pytest-commit] Issue #153: Support running tests on an existant virtual environment (hpk42/tox) Message-ID: <20140226230031.20467.94465@app10.ash-private.bitbucket.org> New issue 153: Support running tests on an existant virtual environment https://bitbucket.org/hpk42/tox/issue/153/support-running-tests-on-an-existant klmitch: We have a situation where we need to build our own virtual environment, then run tests within that virtual environment to validate it for deployment. It would be useful if there were a command-line option to tox that we could use to tell it to skip constructing the virtual environment and use one in a directory that we specify. If this option were used, tox would need to skip all of its normal virtual environment building/refreshing and simply proceed to run the commands for the designated -e option(s) with the specified virtual environment active; perhaps something like: tox -e py26,pep8 -V /home/deploy/v1234/virtualenv From issues-reply at bitbucket.org Thu Feb 27 02:49:56 2014 From: issues-reply at bitbucket.org (Mantas Zilinskis) Date: Thu, 27 Feb 2014 01:49:56 -0000 Subject: [Pytest-commit] Issue #471: pytest keeps running deleted tests (hpk42/pytest) Message-ID: <20140227014956.21099.27876@app12.ash-private.bitbucket.org> New issue 471: pytest keeps running deleted tests https://bitbucket.org/hpk42/pytest/issue/471/pytest-keeps-running-deleted-tests Mantas Zilinskis: i just started using pytest for my django projects. couple days ago playing around with a test project i noticed that pytest sometimes would run some test functions which were already deleted. did not pay attention to that then. today i started new django project with pytest-bdd, pytest-django. after creating some test function, ran tests. then decided to remove one test function. ran the test again and noticed that pytest collected the deleted test. 
From issues-reply at bitbucket.org Thu Feb 27 00:00:31 2014
From: issues-reply at bitbucket.org (klmitch)
Date: Wed, 26 Feb 2014 23:00:31 -0000
Subject: [Pytest-commit] Issue #153: Support running tests on an existing virtual environment (hpk42/tox)
Message-ID: <20140226230031.20467.94465@app10.ash-private.bitbucket.org>

New issue 153: Support running tests on an existing virtual environment
https://bitbucket.org/hpk42/tox/issue/153/support-running-tests-on-an-existant

klmitch:

We have a situation where we need to build our own virtual environment, then run tests within that virtual environment to validate it for deployment. It would be useful if there were a command-line option telling tox to skip constructing the virtual environment and instead use one in a directory that we specify. With this option, tox would skip all of its normal virtual environment building/refreshing and simply run the commands for the designated -e option(s) with the specified virtual environment active; perhaps something like:

```
tox -e py26,pep8 -V /home/deploy/v1234/virtualenv
```

From issues-reply at bitbucket.org Thu Feb 27 02:49:56 2014
From: issues-reply at bitbucket.org (Mantas Zilinskis)
Date: Thu, 27 Feb 2014 01:49:56 -0000
Subject: [Pytest-commit] Issue #471: pytest keeps running deleted tests (hpk42/pytest)
Message-ID: <20140227014956.21099.27876@app12.ash-private.bitbucket.org>

New issue 471: pytest keeps running deleted tests
https://bitbucket.org/hpk42/pytest/issue/471/pytest-keeps-running-deleted-tests

Mantas Zilinskis:

I just started using pytest for my Django projects. A couple of days ago, playing around with a test project, I noticed that pytest would sometimes run test functions that had already been deleted. I did not pay attention to it then. Today I started a new Django project with pytest-bdd and pytest-django. After creating some test functions I ran the tests, then decided to remove one test function. I ran the tests again and noticed that pytest still collected the deleted test. I removed all .pyc files from the working directory, cleared all __pycache__ directories, and ran the tests again, but it still collected the deleted test. Nothing I tried helped. I'm running in a virtualenv with Python 2.7, Django 1.6, and Mac OS X 10.9.1. What would be the way to debug this?
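One way to start debugging this, assuming the phantom tests come from a stale compiled module or a duplicate module shadowing the edited file, is to make pytest report which file each collected test was actually loaded from, e.g. with a small `conftest.py` hook:

```python
# conftest.py -- run with `py.test -s` so the prints are visible.
def pytest_collection_modifyitems(items):
    for item in items:
        # Python test items carry the module they were imported from.
        module = getattr(item, "module", None)
        module_file = getattr(module, "__file__", "<no module>")
        print("%s collected from %s" % (item.nodeid, module_file))
```

A deleted test that still shows up pointing at a .pyc path, or at a directory other than the one being edited, tells you where the stale copy lives.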