From holger at merlinux.eu Mon Dec 1 10:45:37 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 1 Dec 2014 10:45:37 +0100 Subject: [pytest-dev] Help with parametrized fixtures In-Reply-To: <87a93c8erk.fsf@vostro.rath.org> References: <87oartg1cf.fsf@rath.org> <54764697.2090802@oddbird.net> <87a93c8erk.fsf@vostro.rath.org> Message-ID: <20141201094537.GB4295@merlinux.eu> On Thu, Nov 27, 2014 at 14:22 -0800, Nikolaus Rath wrote: > Carl Meyer writes: > > Hi Nikolaus, > > > > On 11/26/2014 01:21 PM, Nikolaus Rath wrote: > >> I have a parametrized fixture foo, and some test cases that use it: > >> > >> @pytest.fixture(params=(1,2,3,4,5)) > >> def foo(request): > >> # ... > >> return foo_f > >> > >> def test_1(foo): > >> # ... > >> > >> def test_2(foo): > >> # .... > >> > >> Now, I'd like to add an additional test, but running it only makes sense > >> for some of the fixture parameters. So far I've solved this by providing > >> the parameter value as a fixture attribute and skipping "unsupported" values: > >> > >> @pytest.fixture(params=(1,2,3,4,5)) > >> def foo(request): > >> # ... > >> foo_f.param = request.param > >> return foo_f > >> > >> def test_3(foo): > >> if foo.param not in (1,4,5): > >> raise pytest.skip("doesn't make sense") > >> > >> This works, but I don't like it very much because it means that the test > >> suite can never be executed without skipping some tests. I'd rather > >> reserve skipping for cases where the test could in principle be > >> executed, but for some reason cannot work at the moment (e.g. because > >> there's no internet, or a required utility is not installed). > >> > >> Is there a way to solve the above scenario without skipping tests? > >> > >> Ideally, I'd be able to do something like this: > >> > >> @only_for_param((1,4,5)) > >> def test_3(foo): > >> ... > > > > I think in order to do this you'll need to use the > > `pytest_generate_tests` hook: > > http://pytest.org/latest/parametrize.html#pytest-generate-tests > > Thanks for the link! The following seems to work nicely: > > #!/usr/bin/env python3 > import pytest > > @pytest.fixture() > def bar(request): > s = 'bar-%d' % request.param > print('preparing', s) > return s > > def test_one(bar): > pass > > def test_two(bar): > pass > test_two.test_with = (2,3) > > def pytest_generate_tests(metafunc): > params = (1,2,3,4) > if not 'bar' in metafunc.fixturenames: > return > > test_with = getattr(metafunc.function, 'test_with', None) > if test_with: > params = test_with > metafunc.parametrize("bar", params, indirect=True) > > > Unfortunately, it seems there are several issues with parametrizing > fixtures these way that are probably going to bite me as things get more > complex: > > https://bitbucket.org/hpk42/pytest/issue/635 > https://bitbucket.org/hpk42/pytest/issue/634 > https://bitbucket.org/hpk42/pytest/issue/531 > https://bitbucket.org/hpk42/pytest/issue/519 > > > (Maybe mentioning them here gives them more attention :-). Heh, you are touching the edges of what is currently possible and implemented with respect to parametrization, including what are probably real bugs. For my own uses of parametrization those edges usually did not play a role yet IIRC. If you need help in trying to fix things yourself let me know. If you don't want to tackle that it's fine as well, maybe somebody else gets to it. In any case, thanks for creating the detailed issues! 
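A closely related variant, sketched here only as an illustration: the per-test restriction can be carried by a custom marker instead of a bare function attribute. The marker name restrict_params is invented for this sketch and is not a built-in pytest marker; it assumes the pytest 2.x behaviour where an applied marker is reachable as an attribute of metafunc.function, which is also the workaround discussed in the next message.

    import pytest

    @pytest.fixture()
    def bar(request):
        return 'bar-%d' % request.param

    def test_one(bar):
        pass

    # restrict_params is a made-up marker name, not part of pytest itself.
    @pytest.mark.restrict_params(2, 3)
    def test_two(bar):
        pass

    def pytest_generate_tests(metafunc):
        if 'bar' not in metafunc.fixturenames:
            return
        params = (1, 2, 3, 4)
        # In pytest 2.x the applied marker shows up as an attribute on the
        # test function; fall back to the full parameter set otherwise.
        marker = getattr(metafunc.function, 'restrict_params', None)
        if marker is not None:
            params = marker.args
        metafunc.parametrize("bar", params, indirect=True)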
best, holger From holger at merlinux.eu Mon Dec 1 10:52:46 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 1 Dec 2014 10:52:46 +0100 Subject: [pytest-dev] Access markers from pytest_generate_tests? In-Reply-To: <87wq6ggqmy.fsf@vostro.rath.org> References: <87wq6ggqmy.fsf@vostro.rath.org> Message-ID: <20141201095246.GC4295@merlinux.eu> Hi Nikolaus, On Thu, Nov 27, 2014 at 15:39 -0800, Nikolaus Rath wrote: > Hello, > > Is there a way to access test markers sin the pytest_generate_tests > hook? > > For example: > > @pytest.mark.bla > def test_one(): > # ... > > def pytest_generate_tests(metafunc): > if metafunc_has_bla_mark(metafunc): > # do something > > > But what do you have to use for metafunc_has_bla_mark? Unfortunately > metafunc.function does not have a `keywords` attribute... Right now you probably have to use hasattr(metafunc.function, "somename") but i agree that adding an API to metafunc is much better. It should be something like: metafunc.get_marker(...) which is similar to the existing: item.get_marker(...) which you can use in other hooks and contexts. best, holger From flub at devork.be Thu Dec 4 22:14:19 2014 From: flub at devork.be (Floris Bruynooghe) Date: Thu, 4 Dec 2014 21:14:19 +0000 Subject: [pytest-dev] Pytest fails with the wrong error when unicode_literals is present In-Reply-To: References: Message-ID: Hi, So if I understand all this correctly there are two things going on: 1) There's a bug with re-interpret which does not respect the unicode_literals from the original source. 2) re-interpret is being used instead of rewrite because an assert occurs in a file which is not a test module itself. The second here is basically the same as https://bitbucket.org/hpk42/pytest/issue/612/syntax-error-on-assertion-reinterpretation IIUC. To be perfectly honest my personal reaction to this is and has always been (before I was aware that this is a way to trigger re-interpret) that one should not have assertions in fixtures. Now experience seems to indicate I might be rather lonely in that opinion so maybe that's not a too useful stance. I've been trying to think about this a few times but haven't really found a useful answer to this problem yet. As for the former issue here: I'm not sure if anyone is going to have a lot of enthusiasm to fix this given that I think we would kind of like to drop re-interpret at some point. I suspect some features have been sneaking in which do not properly work in re-interpret by now and it's generally not been getting much love since rewrite came about. So I think it's much more important to try and find a solution for problem of re-write being avoided. But obviously a pull request with a fix would still be gratefully accepted. Regards, Floris On 25 November 2014 at 12:49, Bruno Oliveira wrote: > Hi everyone, > > Besides the problem that reinterpret is executing code in a context without > unicode_literals enabled, this also showcases that when moving a fixture > with has asserts on it from a test file to a conftest file, asserts will not > be rewritten anymore and reinterpretation will be used instead, which might > cause unexpected issues, as illustrated by Gustavo. > > The first issue looks like a bug to me, not sure what to do about this > second point. Any thoughts? > > Cheers, > > On Mon, Nov 24, 2014 at 1:51 PM, Edison Gustavo Muenz > wrote: >> >> TL;DR; >> >> The eval() method of the Frame object is not ?inheriting? the __future__ >> imports from the frame. 
>> >> Long explanation >> >> Suppose I have a fixture in which I write my assert statements. If this >> fixture has strings on the assert statement then when the assert fails I get >> the wrong kind of errors. >> >> The problem is that the pytest assertion rewrite mechanism is calling >> eval() on the code that contains the frame >> >> To better illustrate what happens, see this gist >> >> The tests have the following output: >> >> test_plugin_fixture - fail >> test_conftest_fixture - fail >> test_inline_fixture - pass >> test_no_fixture - pass >> >> The output of test_plugin_fixture (omitted some output for brevity): >> >> $ pytest -k test_plugin_fixture >> >> msg = e.value.msg >> > assert msg.startswith('assert > > msg >> E AssertionError: ValueError: ValueAndString(2, 'lkajflaskjfalskf') >> << Error! >> >> While debugging I found out that the ValueAndString constructor was being >> called 3 times, instead of 2. The 3rd call happened in code.py:100, which is >> part of the py lib library. The ?wrong? behaviour can be seen on the >> run_pyframe_eval.py. >> >> I?ve tested this code on Pytest 2.6.4 with Python 2.7.8 in Windows 7 - >> 64bits. >> >> >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev >> > > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > From nicoddemus at gmail.com Thu Dec 4 23:32:04 2014 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 4 Dec 2014 20:32:04 -0200 Subject: [pytest-dev] Pytest fails with the wrong error when unicode_literals is present In-Reply-To: References: Message-ID: Hi Floris, On Thu, Dec 4, 2014 at 7:14 PM, Floris Bruynooghe wrote: > > 1) There's a bug with re-interpret which does not respect the > unicode_literals from the original source. > > 2) re-interpret is being used instead of rewrite because an assert > occurs in a file which is not a test module itself. The second here is basically the same as > > https://bitbucket.org/hpk42/pytest/issue/612/syntax-error-on-assertion-reinterpretation > IIUC. > Exactly. :) To be perfectly honest my personal reaction to this is and has always > been (before I was aware that this is a way to trigger re-interpret) > that one should not have assertions in fixtures. Now experience seems > to indicate I might be rather lonely in that opinion so maybe that's > not a too useful stance. I've been trying to think about this a few > times but haven't really found a useful answer to this problem yet. > Since pytest knows all fixtures, and indirectly their source files, perhaps a solution would be to also rewrite asserts in all modules where fixtures are defined. > As for the former issue here: I'm not sure if anyone is going to have > a lot of enthusiasm to fix this given that I think we would kind of > like to drop re-interpret at some point. I suspect some features have > been sneaking in which do not properly work in re-interpret by now and > it's generally not been getting much love since rewrite came about. > So I think it's much more important to try and find a solution for > problem of re-write being avoided. But obviously a pull request with > a fix would still be gratefully accepted. 
> It seems doable: Frame uses eval() to re-evaluate the failed assert expression; it would have to be changed use compile() passing the appropriate flags, which can be obtained from the original code object from the frame with the failed assert. Cheers, Bruno On 25 November 2014 at 12:49, Bruno Oliveira wrote: > > Hi everyone, > > > > Besides the problem that reinterpret is executing code in a context > without > > unicode_literals enabled, this also showcases that when moving a fixture > > with has asserts on it from a test file to a conftest file, asserts will > not > > be rewritten anymore and reinterpretation will be used instead, which > might > > cause unexpected issues, as illustrated by Gustavo. > > > > The first issue looks like a bug to me, not sure what to do about this > > second point. Any thoughts? > > > > Cheers, > > > > On Mon, Nov 24, 2014 at 1:51 PM, Edison Gustavo Muenz > > wrote: > >> > >> TL;DR; > >> > >> The eval() method of the Frame object is not ?inheriting? the __future__ > >> imports from the frame. > >> > >> Long explanation > >> > >> Suppose I have a fixture in which I write my assert statements. If this > >> fixture has strings on the assert statement then when the assert fails > I get > >> the wrong kind of errors. > >> > >> The problem is that the pytest assertion rewrite mechanism is calling > >> eval() on the code that contains the frame > >> > >> To better illustrate what happens, see this gist > >> > >> The tests have the following output: > >> > >> test_plugin_fixture - fail > >> test_conftest_fixture - fail > >> test_inline_fixture - pass > >> test_no_fixture - pass > >> > >> The output of test_plugin_fixture (omitted some output for brevity): > >> > >> $ pytest -k test_plugin_fixture > >> > >> msg = e.value.msg > >> > assert msg.startswith('assert >> > msg > >> E AssertionError: ValueError: ValueAndString(2, > 'lkajflaskjfalskf') > >> << Error! > >> > >> While debugging I found out that the ValueAndString constructor was > being > >> called 3 times, instead of 2. The 3rd call happened in code.py:100, > which is > >> part of the py lib library. The ?wrong? behaviour can be seen on the > >> run_pyframe_eval.py. > >> > >> I?ve tested this code on Pytest 2.6.4 with Python 2.7.8 in Windows 7 - > >> 64bits. > >> > >> > >> _______________________________________________ > >> pytest-dev mailing list > >> pytest-dev at python.org > >> https://mail.python.org/mailman/listinfo/pytest-dev > >> > > > > > > _______________________________________________ > > pytest-dev mailing list > > pytest-dev at python.org > > https://mail.python.org/mailman/listinfo/pytest-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goncalo.magno at gmail.com Sat Dec 6 19:26:51 2014 From: goncalo.magno at gmail.com (Goncalo Morgado) Date: Sat, 6 Dec 2014 18:26:51 +0000 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs Message-ID: Hi pytest community, I am not sure this is the right place to ask this, and I am sorry if it's not. I am trying to make use of multiple CPUs to run my tests, which I can for separately defined tests (different test_ functions for instance). *The challenge is to run tests, generated from one single test_ function which fixture it depends on is parametrized with multiple parameters, on multiple CPUs*. Currently I have this test that generates more than a thousand tests (because of the parametrized fixture it depends on) and ends up being very time consuming to run in one single CPU. 
Thanks for this great tool that is pytest! Cheers, Gon?alo -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Sat Dec 6 19:30:07 2014 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Sat, 6 Dec 2014 16:30:07 -0200 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: Have you tried using pytest-xdist? Cheers, On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado wrote: > > Hi pytest community, > > I am not sure this is the right place to ask this, and I am sorry if it's > not. > I am trying to make use of multiple CPUs to run my tests, which I can for > separately defined tests (different test_ functions for instance). *The > challenge is to run tests, generated from one single test_ function which > fixture it depends on is parametrized with multiple parameters, on multiple > CPUs*. > Currently I have this test that generates more than a thousand tests > (because of the parametrized fixture it depends on) and ends up being very > time consuming to run in one single CPU. > > Thanks for this great tool that is pytest! > > Cheers, > Gon?alo > > > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goncalo.magno at gmail.com Sat Dec 6 19:45:36 2014 From: goncalo.magno at gmail.com (Goncalo Morgado) Date: Sat, 6 Dec 2014 18:45:36 +0000 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: pytest-xdist helped me to run different tests in multiple CPUs, for instance test_foo() and test_bar() would easily be run in two CPUs with pytest-xdist. The problem is when you have test_foobar(some_fixture), where some_fixture is parametrized with, let's say two parameters ["a", "b"], which will lead to two different test runs of test_foobar(). The challenge is to have these two test runs running in two CPUs. Cheers Gon?alo On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira wrote: > Have you tried using pytest-xdist? > > Cheers, > > On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado > wrote: > >> >> Hi pytest community, >> >> I am not sure this is the right place to ask this, and I am sorry if it's >> not. >> I am trying to make use of multiple CPUs to run my tests, which I can for >> separately defined tests (different test_ functions for instance). *The >> challenge is to run tests, generated from one single test_ function which >> fixture it depends on is parametrized with multiple parameters, on multiple >> CPUs*. >> Currently I have this test that generates more than a thousand tests >> (because of the parametrized fixture it depends on) and ends up being very >> time consuming to run in one single CPU. >> >> Thanks for this great tool that is pytest! >> >> Cheers, >> Gon?alo >> >> >> >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goncalo.magno at gmail.com Sat Dec 6 20:12:03 2014 From: goncalo.magno at gmail.com (Goncalo Morgado) Date: Sat, 6 Dec 2014 19:12:03 +0000 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: To better depict what I am saying follows some code. 
$ sudo pip install pytest $ sudo pip install pytest-xdist $ py.test -vs test_dummy.py -n 2 #!/usr/bin/env python """ This is test_dummy.py """ import pytest import time @pytest.fixture(scope="function", params=["a1", "a2"]) def fix_dummy(request): print "fix_dummy: parameter: %s" % (request.param, ) return request.param def test_dummy1(fix_dummy): print "parameter from fix_dummy: %s" %(fix_dummy, ) time.sleep(2) Cheers, Gon?alo On Sat, Dec 6, 2014 at 6:45 PM, Goncalo Morgado wrote: > > pytest-xdist helped me to run different tests in multiple CPUs, for > instance test_foo() and test_bar() would easily be run in two CPUs with > pytest-xdist. The problem is when you have test_foobar(some_fixture), where > some_fixture is parametrized with, let's say two parameters ["a", "b"], > which will lead to two different test runs of test_foobar(). The challenge > is to have these two test runs running in two CPUs. > > Cheers > Gon?alo > > > On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira > wrote: > >> Have you tried using pytest-xdist? >> >> Cheers, >> >> On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado >> wrote: >> >>> >>> Hi pytest community, >>> >>> I am not sure this is the right place to ask this, and I am sorry if >>> it's not. >>> I am trying to make use of multiple CPUs to run my tests, which I can >>> for separately defined tests (different test_ functions for instance). *The >>> challenge is to run tests, generated from one single test_ function which >>> fixture it depends on is parametrized with multiple parameters, on multiple >>> CPUs*. >>> Currently I have this test that generates more than a thousand tests >>> (because of the parametrized fixture it depends on) and ends up being very >>> time consuming to run in one single CPU. >>> >>> Thanks for this great tool that is pytest! >>> >>> Cheers, >>> Gon?alo >>> >>> >>> >>> _______________________________________________ >>> pytest-dev mailing list >>> pytest-dev at python.org >>> https://mail.python.org/mailman/listinfo/pytest-dev >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Sat Dec 6 21:52:42 2014 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Sat, 6 Dec 2014 18:52:42 -0200 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: Thanks for the example. I tried and it seems that xdist is running the tests in parallel: the more CPUs I use, the fastest the test suite runs. Do you see a different behavior? Cheers, On Sat, Dec 6, 2014 at 5:12 PM, Goncalo Morgado wrote: > To better depict what I am saying follows some code. > > $ sudo pip install pytest > $ sudo pip install pytest-xdist > $ py.test -vs test_dummy.py -n 2 > > > #!/usr/bin/env python > > """ > This is test_dummy.py > """ > import pytest > import time > > @pytest.fixture(scope="function", params=["a1", "a2"]) > def fix_dummy(request): > print "fix_dummy: parameter: %s" % (request.param, ) > return request.param > > def test_dummy1(fix_dummy): > print "parameter from fix_dummy: %s" %(fix_dummy, ) > time.sleep(2) > > > Cheers, > Gon?alo > > > > On Sat, Dec 6, 2014 at 6:45 PM, Goncalo Morgado > wrote: > >> >> pytest-xdist helped me to run different tests in multiple CPUs, for >> instance test_foo() and test_bar() would easily be run in two CPUs with >> pytest-xdist. 
The problem is when you have test_foobar(some_fixture), where >> some_fixture is parametrized with, let's say two parameters ["a", "b"], >> which will lead to two different test runs of test_foobar(). The challenge >> is to have these two test runs running in two CPUs. >> >> Cheers >> Gon?alo >> >> >> On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira >> wrote: >> >>> Have you tried using pytest-xdist? >>> >>> Cheers, >>> >>> On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado >> > wrote: >>> >>>> >>>> Hi pytest community, >>>> >>>> I am not sure this is the right place to ask this, and I am sorry if >>>> it's not. >>>> I am trying to make use of multiple CPUs to run my tests, which I can >>>> for separately defined tests (different test_ functions for instance). *The >>>> challenge is to run tests, generated from one single test_ function which >>>> fixture it depends on is parametrized with multiple parameters, on multiple >>>> CPUs*. >>>> Currently I have this test that generates more than a thousand tests >>>> (because of the parametrized fixture it depends on) and ends up being very >>>> time consuming to run in one single CPU. >>>> >>>> Thanks for this great tool that is pytest! >>>> >>>> Cheers, >>>> Gon?alo >>>> >>>> >>>> >>>> _______________________________________________ >>>> pytest-dev mailing list >>>> pytest-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pytest-dev >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goncalo.magno at gmail.com Sat Dec 6 22:29:53 2014 From: goncalo.magno at gmail.com (Goncalo Morgado) Date: Sat, 6 Dec 2014 21:29:53 +0000 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: I might be missing something very fundamental here, but tests are not running in parallel on my side, as you may see tests are taking 4 seconds-ish: $ py.test -vs test_dummy.py -n 2 ====================================================================================== test session starts ======================================================================================= platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- /usr/bin/python plugins: xdist [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] gw0 [2] / gw1 [2] scheduling tests via LoadScheduling test_dummy.py::test_dummy1[a1] [gw1] PASSED test_dummy.py::test_dummy1[a1] test_dummy.py::test_dummy1[a2] [gw1] PASSED test_dummy.py::test_dummy1[a2] ==================================================================================== 2 passed in 4.47 seconds ==================================================================================== Cheers, Gon?alo On Sat, Dec 6, 2014 at 8:52 PM, Bruno Oliveira wrote: > Thanks for the example. > > I tried and it seems that xdist is running the tests in parallel: the more > CPUs I use, the fastest the test suite runs. Do you see a different > behavior? > > Cheers, > > On Sat, Dec 6, 2014 at 5:12 PM, Goncalo Morgado > wrote: > >> To better depict what I am saying follows some code. 
>> >> $ sudo pip install pytest >> $ sudo pip install pytest-xdist >> $ py.test -vs test_dummy.py -n 2 >> >> >> #!/usr/bin/env python >> >> """ >> This is test_dummy.py >> """ >> import pytest >> import time >> >> @pytest.fixture(scope="function", params=["a1", "a2"]) >> def fix_dummy(request): >> print "fix_dummy: parameter: %s" % (request.param, ) >> return request.param >> >> def test_dummy1(fix_dummy): >> print "parameter from fix_dummy: %s" %(fix_dummy, ) >> time.sleep(2) >> >> >> Cheers, >> Gon?alo >> >> >> >> On Sat, Dec 6, 2014 at 6:45 PM, Goncalo Morgado >> wrote: >> >>> >>> pytest-xdist helped me to run different tests in multiple CPUs, for >>> instance test_foo() and test_bar() would easily be run in two CPUs with >>> pytest-xdist. The problem is when you have test_foobar(some_fixture), where >>> some_fixture is parametrized with, let's say two parameters ["a", "b"], >>> which will lead to two different test runs of test_foobar(). The challenge >>> is to have these two test runs running in two CPUs. >>> >>> Cheers >>> Gon?alo >>> >>> >>> On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira >>> wrote: >>> >>>> Have you tried using pytest-xdist? >>>> >>>> Cheers, >>>> >>>> On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado < >>>> goncalo.magno at gmail.com> wrote: >>>> >>>>> >>>>> Hi pytest community, >>>>> >>>>> I am not sure this is the right place to ask this, and I am sorry if >>>>> it's not. >>>>> I am trying to make use of multiple CPUs to run my tests, which I can >>>>> for separately defined tests (different test_ functions for instance). *The >>>>> challenge is to run tests, generated from one single test_ function which >>>>> fixture it depends on is parametrized with multiple parameters, on multiple >>>>> CPUs*. >>>>> Currently I have this test that generates more than a thousand tests >>>>> (because of the parametrized fixture it depends on) and ends up being very >>>>> time consuming to run in one single CPU. >>>>> >>>>> Thanks for this great tool that is pytest! >>>>> >>>>> Cheers, >>>>> Gon?alo >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> pytest-dev mailing list >>>>> pytest-dev at python.org >>>>> https://mail.python.org/mailman/listinfo/pytest-dev >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From goncalo.magno at gmail.com Sat Dec 6 22:38:30 2014 From: goncalo.magno at gmail.com (Goncalo Morgado) Date: Sat, 6 Dec 2014 21:38:30 +0000 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: On the other hand, if you add another test_ function there will be no changes in total duration because pytest is actually running both tests in parallel: #!/usr/bin/env python import pytest import time @pytest.fixture(scope="function", params=["a1", "a2"]) def fix_dummy(request): print "fix_dummy: parameter: %s" % (request.param, ) return request.param def test_dummy1(fix_dummy): print "parameter from fix_dummy: %s" %(fix_dummy, ) time.sleep(2) def test_dummy2(fix_dummy): print "parameter from fix_dummy: %s" %(fix_dummy, ) time.sleep(2) $ py.test -vs test_dummy.py -n 2 ====================================================================================== test session starts ======================================================================================= platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- /usr/bin/python plugins: xdist [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] gw0 [4] / gw1 [4] scheduling tests via LoadScheduling test_dummy.py::test_dummy2[a1] test_dummy.py::test_dummy1[a1] [gw1] PASSED test_dummy.py::test_dummy1[a1] [gw0] PASSED test_dummy.py::test_dummy2[a1] test_dummy.py::test_dummy1[a2] test_dummy.py::test_dummy2[a2] [gw1] PASSED test_dummy.py::test_dummy1[a2] [gw0] PASSED test_dummy.py::test_dummy2[a2] ==================================================================================== 4 passed in 4.66 seconds ==================================================================================== I hope I am not confusing you guys. Cheers, Gon?alo On Sat, Dec 6, 2014 at 9:29 PM, Goncalo Morgado wrote: > > > I might be missing something very fundamental here, but tests are not > running in parallel on my side, as you may see tests are taking 4 > seconds-ish: > > $ py.test -vs test_dummy.py -n 2 > ====================================================================================== > test session starts > ======================================================================================= > platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- > /usr/bin/python > plugins: xdist > [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest > [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest > [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] > [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] > gw0 [2] / gw1 [2] > scheduling tests via LoadScheduling > > test_dummy.py::test_dummy1[a1] > [gw1] PASSED test_dummy.py::test_dummy1[a1] > test_dummy.py::test_dummy1[a2] > [gw1] PASSED test_dummy.py::test_dummy1[a2] > > ==================================================================================== > 2 passed in 4.47 seconds > ==================================================================================== > > Cheers, > Gon?alo > > > > On Sat, Dec 6, 2014 at 8:52 PM, Bruno Oliveira > wrote: > >> Thanks for the example. >> >> I tried and it seems that xdist is running the tests in parallel: the >> more CPUs I use, the fastest the test suite runs. Do you see a different >> behavior? 
>> >> Cheers, >> >> On Sat, Dec 6, 2014 at 5:12 PM, Goncalo Morgado >> wrote: >> >>> To better depict what I am saying follows some code. >>> >>> $ sudo pip install pytest >>> $ sudo pip install pytest-xdist >>> $ py.test -vs test_dummy.py -n 2 >>> >>> >>> #!/usr/bin/env python >>> >>> """ >>> This is test_dummy.py >>> """ >>> import pytest >>> import time >>> >>> @pytest.fixture(scope="function", params=["a1", "a2"]) >>> def fix_dummy(request): >>> print "fix_dummy: parameter: %s" % (request.param, ) >>> return request.param >>> >>> def test_dummy1(fix_dummy): >>> print "parameter from fix_dummy: %s" %(fix_dummy, ) >>> time.sleep(2) >>> >>> >>> Cheers, >>> Gon?alo >>> >>> >>> >>> On Sat, Dec 6, 2014 at 6:45 PM, Goncalo Morgado >> > wrote: >>> >>>> >>>> pytest-xdist helped me to run different tests in multiple CPUs, for >>>> instance test_foo() and test_bar() would easily be run in two CPUs with >>>> pytest-xdist. The problem is when you have test_foobar(some_fixture), where >>>> some_fixture is parametrized with, let's say two parameters ["a", "b"], >>>> which will lead to two different test runs of test_foobar(). The challenge >>>> is to have these two test runs running in two CPUs. >>>> >>>> Cheers >>>> Gon?alo >>>> >>>> >>>> On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira >>>> wrote: >>>> >>>>> Have you tried using pytest-xdist? >>>>> >>>>> Cheers, >>>>> >>>>> On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado < >>>>> goncalo.magno at gmail.com> wrote: >>>>> >>>>>> >>>>>> Hi pytest community, >>>>>> >>>>>> I am not sure this is the right place to ask this, and I am sorry if >>>>>> it's not. >>>>>> I am trying to make use of multiple CPUs to run my tests, which I can >>>>>> for separately defined tests (different test_ functions for instance). *The >>>>>> challenge is to run tests, generated from one single test_ function which >>>>>> fixture it depends on is parametrized with multiple parameters, on multiple >>>>>> CPUs*. >>>>>> Currently I have this test that generates more than a thousand tests >>>>>> (because of the parametrized fixture it depends on) and ends up being very >>>>>> time consuming to run in one single CPU. >>>>>> >>>>>> Thanks for this great tool that is pytest! >>>>>> >>>>>> Cheers, >>>>>> Gon?alo >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> pytest-dev mailing list >>>>>> pytest-dev at python.org >>>>>> https://mail.python.org/mailman/listinfo/pytest-dev >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From edisongustavo at gmail.com Mon Dec 8 13:26:18 2014 From: edisongustavo at gmail.com (Edison Gustavo Muenz) Date: Mon, 8 Dec 2014 10:26:18 -0200 Subject: [pytest-dev] Pytest fails with the wrong error when unicode_literals is present In-Reply-To: References: Message-ID: Floris, you are correct. Those are the 2 issues. As for the issue number *1* I was thinking that this could be solved in the *pylib* library. As for the solution I don?t know if the imports should be parsed out from the fullsource attribute as that might be slow. If that?s the solution I could create a pull-request. Regards, Gustavo ? -------------- next part -------------- An HTML attachment was scrubbed... 
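Bruno's earlier suggestion in this thread (derive the compile flags from the code object of the failing frame instead of re-parsing the source for __future__ imports) could look roughly like the sketch below. The helper names and the call site are invented for illustration and are not part of py or pytest.

    import __future__

    def futures_compile_flags(code_obj):
        # Collect the __future__ compiler flags that were active when
        # `code_obj` was compiled (e.g. unicode_literals on Python 2).
        flags = 0
        for name in __future__.all_feature_names:
            feature = getattr(__future__, name)
            if code_obj.co_flags & feature.compiler_flag:
                flags |= feature.compiler_flag
        return flags

    def eval_in_frame(expr, frame):
        # Re-evaluate `expr` the way the original frame would have,
        # keeping its __future__ behaviour; dont_inherit=True so that
        # only the flags passed here apply.
        flags = futures_compile_flags(frame.f_code)
        code = compile(expr, '<assertion reinterpretation>', 'eval', flags, True)
        return eval(code, frame.f_globals, dict(frame.f_locals))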
URL: From nicoddemus at gmail.com Mon Dec 8 18:07:06 2014 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Mon, 8 Dec 2014 15:07:06 -0200 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: Hi Gon?alo, pytest-xdist has its own overhead (starting up slaves, collecting, etc), plus it might not be very efficient for few tests. If you increase the number of tests (use params=range(20) for example), you will notice that it runs faster than running without xdist: import pytestimport time @pytest.fixture(scope="function", params=range(20))def fix_dummy(request): return request.param def test_dummy1(fix_dummy): time.sleep(2) Running py.test -n4 test_bar.py: ============================= test session starts ============================= platform win32 -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4 plugins: timeout, localserver, xdist, cov, mock gw0 [20] / gw1 [20] / gw2 [20] / gw3 [20] scheduling tests via LoadScheduling .................... ========================= 20 passed in 11.12 seconds ========================== Running py.test test_bar.py: ============================= test session starts ============================= platform win32 -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4 plugins: timeout, localserver, xdist, cov, mock collected 20 items test_bar.py .................... ========================= 20 passed in 40.06 seconds ========================== As you can see, it is running in parallel for the first case. :) ? On Sat, Dec 6, 2014 at 7:38 PM, Goncalo Morgado wrote: > > On the other hand, if you add another test_ function there will be no > changes in total duration because pytest is actually running both tests in > parallel: > > #!/usr/bin/env python > > import pytest > import time > > @pytest.fixture(scope="function", params=["a1", "a2"]) > def fix_dummy(request): > print "fix_dummy: parameter: %s" % (request.param, ) > return request.param > > def test_dummy1(fix_dummy): > print "parameter from fix_dummy: %s" %(fix_dummy, ) > time.sleep(2) > > def test_dummy2(fix_dummy): > print "parameter from fix_dummy: %s" %(fix_dummy, ) > time.sleep(2) > > > > > $ py.test -vs test_dummy.py -n 2 > ====================================================================================== > test session starts > ======================================================================================= > platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- > /usr/bin/python > plugins: xdist > [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest > [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest > [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] > [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] > gw0 [4] / gw1 [4] > scheduling tests via LoadScheduling > > test_dummy.py::test_dummy2[a1] > test_dummy.py::test_dummy1[a1] > [gw1] PASSED test_dummy.py::test_dummy1[a1] > [gw0] PASSED test_dummy.py::test_dummy2[a1] > test_dummy.py::test_dummy1[a2] > test_dummy.py::test_dummy2[a2] > [gw1] PASSED test_dummy.py::test_dummy1[a2] > [gw0] PASSED test_dummy.py::test_dummy2[a2] > > ==================================================================================== > 4 passed in 4.66 seconds > ==================================================================================== > > > I hope I am not confusing you guys. 
> > Cheers, > Gon?alo > > > > > > > > On Sat, Dec 6, 2014 at 9:29 PM, Goncalo Morgado > wrote: > >> >> >> I might be missing something very fundamental here, but tests are not >> running in parallel on my side, as you may see tests are taking 4 >> seconds-ish: >> >> $ py.test -vs test_dummy.py -n 2 >> ====================================================================================== >> test session starts >> ======================================================================================= >> platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- >> /usr/bin/python >> plugins: xdist >> [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest >> [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest >> [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] >> [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] >> gw0 [2] / gw1 [2] >> scheduling tests via LoadScheduling >> >> test_dummy.py::test_dummy1[a1] >> [gw1] PASSED test_dummy.py::test_dummy1[a1] >> test_dummy.py::test_dummy1[a2] >> [gw1] PASSED test_dummy.py::test_dummy1[a2] >> >> ==================================================================================== >> 2 passed in 4.47 seconds >> ==================================================================================== >> >> Cheers, >> Gon?alo >> >> >> >> On Sat, Dec 6, 2014 at 8:52 PM, Bruno Oliveira >> wrote: >> >>> Thanks for the example. >>> >>> I tried and it seems that xdist is running the tests in parallel: the >>> more CPUs I use, the fastest the test suite runs. Do you see a different >>> behavior? >>> >>> Cheers, >>> >>> On Sat, Dec 6, 2014 at 5:12 PM, Goncalo Morgado >> > wrote: >>> >>>> To better depict what I am saying follows some code. >>>> >>>> $ sudo pip install pytest >>>> $ sudo pip install pytest-xdist >>>> $ py.test -vs test_dummy.py -n 2 >>>> >>>> >>>> #!/usr/bin/env python >>>> >>>> """ >>>> This is test_dummy.py >>>> """ >>>> import pytest >>>> import time >>>> >>>> @pytest.fixture(scope="function", params=["a1", "a2"]) >>>> def fix_dummy(request): >>>> print "fix_dummy: parameter: %s" % (request.param, ) >>>> return request.param >>>> >>>> def test_dummy1(fix_dummy): >>>> print "parameter from fix_dummy: %s" %(fix_dummy, ) >>>> time.sleep(2) >>>> >>>> >>>> Cheers, >>>> Gon?alo >>>> >>>> >>>> >>>> On Sat, Dec 6, 2014 at 6:45 PM, Goncalo Morgado < >>>> goncalo.magno at gmail.com> wrote: >>>> >>>>> >>>>> pytest-xdist helped me to run different tests in multiple CPUs, for >>>>> instance test_foo() and test_bar() would easily be run in two CPUs with >>>>> pytest-xdist. The problem is when you have test_foobar(some_fixture), where >>>>> some_fixture is parametrized with, let's say two parameters ["a", "b"], >>>>> which will lead to two different test runs of test_foobar(). The challenge >>>>> is to have these two test runs running in two CPUs. >>>>> >>>>> Cheers >>>>> Gon?alo >>>>> >>>>> >>>>> On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira >>>>> wrote: >>>>> >>>>>> Have you tried using pytest-xdist? >>>>>> >>>>>> Cheers, >>>>>> >>>>>> On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado < >>>>>> goncalo.magno at gmail.com> wrote: >>>>>> >>>>>>> >>>>>>> Hi pytest community, >>>>>>> >>>>>>> I am not sure this is the right place to ask this, and I am sorry if >>>>>>> it's not. >>>>>>> I am trying to make use of multiple CPUs to run my tests, which I >>>>>>> can for separately defined tests (different test_ functions for instance). 
*The >>>>>>> challenge is to run tests, generated from one single test_ function which >>>>>>> fixture it depends on is parametrized with multiple parameters, on multiple >>>>>>> CPUs*. >>>>>>> Currently I have this test that generates more than a thousand tests >>>>>>> (because of the parametrized fixture it depends on) and ends up being very >>>>>>> time consuming to run in one single CPU. >>>>>>> >>>>>>> Thanks for this great tool that is pytest! >>>>>>> >>>>>>> Cheers, >>>>>>> Gon?alo >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> pytest-dev mailing list >>>>>>> pytest-dev at python.org >>>>>>> https://mail.python.org/mailman/listinfo/pytest-dev >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goncalo.magno at gmail.com Mon Dec 8 22:19:45 2014 From: goncalo.magno at gmail.com (Goncalo Morgado) Date: Mon, 8 Dec 2014 21:19:45 +0000 Subject: [pytest-dev] Running a parametrized test case in multiple CPUs In-Reply-To: References: Message-ID: Ok, I see. That's great news, thanks for the info! Cheers, Goncalo On Mon, Dec 8, 2014 at 5:07 PM, Bruno Oliveira wrote: > Hi Gon?alo, > > pytest-xdist has its own overhead (starting up slaves, collecting, etc), > plus it might not be very efficient for few tests. If you increase the > number of tests (use params=range(20) for example), you will notice that > it runs faster than running without xdist: > > import pytestimport time > @pytest.fixture(scope="function", params=range(20))def fix_dummy(request): > return request.param > def test_dummy1(fix_dummy): > time.sleep(2) > > Running py.test -n4 test_bar.py: > > ============================= test session starts ============================= > platform win32 -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4 > plugins: timeout, localserver, xdist, cov, mock > gw0 [20] / gw1 [20] / gw2 [20] / gw3 [20] > scheduling tests via LoadScheduling > .................... > ========================= 20 passed in 11.12 seconds ========================== > > Running py.test test_bar.py: > > ============================= test session starts ============================= > platform win32 -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4 > plugins: timeout, localserver, xdist, cov, mock > collected 20 items > > test_bar.py .................... > > ========================= 20 passed in 40.06 seconds ========================== > > As you can see, it is running in parallel for the first case. :) > ? 
> > On Sat, Dec 6, 2014 at 7:38 PM, Goncalo Morgado > wrote: > >> >> On the other hand, if you add another test_ function there will be no >> changes in total duration because pytest is actually running both tests in >> parallel: >> >> #!/usr/bin/env python >> >> import pytest >> import time >> >> @pytest.fixture(scope="function", params=["a1", "a2"]) >> def fix_dummy(request): >> print "fix_dummy: parameter: %s" % (request.param, ) >> return request.param >> >> def test_dummy1(fix_dummy): >> print "parameter from fix_dummy: %s" %(fix_dummy, ) >> time.sleep(2) >> >> def test_dummy2(fix_dummy): >> print "parameter from fix_dummy: %s" %(fix_dummy, ) >> time.sleep(2) >> >> >> >> >> $ py.test -vs test_dummy.py -n 2 >> ====================================================================================== >> test session starts >> ======================================================================================= >> platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- >> /usr/bin/python >> plugins: xdist >> [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest >> [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest >> [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] >> [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] >> gw0 [4] / gw1 [4] >> scheduling tests via LoadScheduling >> >> test_dummy.py::test_dummy2[a1] >> test_dummy.py::test_dummy1[a1] >> [gw1] PASSED test_dummy.py::test_dummy1[a1] >> [gw0] PASSED test_dummy.py::test_dummy2[a1] >> test_dummy.py::test_dummy1[a2] >> test_dummy.py::test_dummy2[a2] >> [gw1] PASSED test_dummy.py::test_dummy1[a2] >> [gw0] PASSED test_dummy.py::test_dummy2[a2] >> >> ==================================================================================== >> 4 passed in 4.66 seconds >> ==================================================================================== >> >> >> I hope I am not confusing you guys. >> >> Cheers, >> Gon?alo >> >> >> >> >> >> >> >> On Sat, Dec 6, 2014 at 9:29 PM, Goncalo Morgado >> wrote: >> >>> >>> >>> I might be missing something very fundamental here, but tests are not >>> running in parallel on my side, as you may see tests are taking 4 >>> seconds-ish: >>> >>> $ py.test -vs test_dummy.py -n 2 >>> ====================================================================================== >>> test session starts >>> ======================================================================================= >>> platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- >>> /usr/bin/python >>> plugins: xdist >>> [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest >>> [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest >>> [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] >>> [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3] >>> gw0 [2] / gw1 [2] >>> scheduling tests via LoadScheduling >>> >>> test_dummy.py::test_dummy1[a1] >>> [gw1] PASSED test_dummy.py::test_dummy1[a1] >>> test_dummy.py::test_dummy1[a2] >>> [gw1] PASSED test_dummy.py::test_dummy1[a2] >>> >>> ==================================================================================== >>> 2 passed in 4.47 seconds >>> ==================================================================================== >>> >>> Cheers, >>> Gon?alo >>> >>> >>> >>> On Sat, Dec 6, 2014 at 8:52 PM, Bruno Oliveira >>> wrote: >>> >>>> Thanks for the example. >>>> >>>> I tried and it seems that xdist is running the tests in parallel: the >>>> more CPUs I use, the fastest the test suite runs. 
Do you see a different >>>> behavior? >>>> >>>> Cheers, >>>> >>>> On Sat, Dec 6, 2014 at 5:12 PM, Goncalo Morgado < >>>> goncalo.magno at gmail.com> wrote: >>>> >>>>> To better depict what I am saying follows some code. >>>>> >>>>> $ sudo pip install pytest >>>>> $ sudo pip install pytest-xdist >>>>> $ py.test -vs test_dummy.py -n 2 >>>>> >>>>> >>>>> #!/usr/bin/env python >>>>> >>>>> """ >>>>> This is test_dummy.py >>>>> """ >>>>> import pytest >>>>> import time >>>>> >>>>> @pytest.fixture(scope="function", params=["a1", "a2"]) >>>>> def fix_dummy(request): >>>>> print "fix_dummy: parameter: %s" % (request.param, ) >>>>> return request.param >>>>> >>>>> def test_dummy1(fix_dummy): >>>>> print "parameter from fix_dummy: %s" %(fix_dummy, ) >>>>> time.sleep(2) >>>>> >>>>> >>>>> Cheers, >>>>> Gon?alo >>>>> >>>>> >>>>> >>>>> On Sat, Dec 6, 2014 at 6:45 PM, Goncalo Morgado < >>>>> goncalo.magno at gmail.com> wrote: >>>>> >>>>>> >>>>>> pytest-xdist helped me to run different tests in multiple CPUs, for >>>>>> instance test_foo() and test_bar() would easily be run in two CPUs with >>>>>> pytest-xdist. The problem is when you have test_foobar(some_fixture), where >>>>>> some_fixture is parametrized with, let's say two parameters ["a", "b"], >>>>>> which will lead to two different test runs of test_foobar(). The challenge >>>>>> is to have these two test runs running in two CPUs. >>>>>> >>>>>> Cheers >>>>>> Gon?alo >>>>>> >>>>>> >>>>>> On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira >>>>>> wrote: >>>>>> >>>>>>> Have you tried using pytest-xdist? >>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>> On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado < >>>>>>> goncalo.magno at gmail.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> Hi pytest community, >>>>>>>> >>>>>>>> I am not sure this is the right place to ask this, and I am sorry >>>>>>>> if it's not. >>>>>>>> I am trying to make use of multiple CPUs to run my tests, which I >>>>>>>> can for separately defined tests (different test_ functions for instance). *The >>>>>>>> challenge is to run tests, generated from one single test_ function which >>>>>>>> fixture it depends on is parametrized with multiple parameters, on multiple >>>>>>>> CPUs*. >>>>>>>> Currently I have this test that generates more than a thousand >>>>>>>> tests (because of the parametrized fixture it depends on) and ends up being >>>>>>>> very time consuming to run in one single CPU. >>>>>>>> >>>>>>>> Thanks for this great tool that is pytest! >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Gon?alo >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> pytest-dev mailing list >>>>>>>> pytest-dev at python.org >>>>>>>> https://mail.python.org/mailman/listinfo/pytest-dev >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tibor.arpas at infinit.sk Fri Dec 12 15:21:13 2014 From: tibor.arpas at infinit.sk (Tibor Arpas) Date: Fri, 12 Dec 2014 15:21:13 +0100 Subject: [pytest-dev] Crowdfunding a pytest plugin Message-ID: Hello pytesters, We are putting together a small Indiegogo campaign for a pytest plugin which should significantly improve the experience of executing a long test suite. We'd be glad to hear any feedback before launching it. The draft is here: https://www.indiegogo.com/project/preview/4fe07d66 Also, is anybody interested in helping with promotion and/or development? 
Cheers, Tibor From mail at florian-schulze.net Fri Dec 12 16:17:36 2014 From: mail at florian-schulze.net (Florian Schulze) Date: Fri, 12 Dec 2014 16:17:36 +0100 Subject: [pytest-dev] Crowdfunding a pytest plugin In-Reply-To: References: Message-ID: <46C88FB1-F0FA-4AD3-B2E2-5285D438DABB@florian-schulze.net> > We are putting together a small Indiegogo campaign for a pytest plugin > which should significantly improve the experience of executing a long > test suite. We'd be glad to hear any feedback before launching it. The > draft is here: https://www.indiegogo.com/project/preview/4fe07d66 This certainly looks interesting. The "last failures" part is implemented in pytest-cache, are you aware of that? The reordering by speed sounds nice. Be aware that fixture setup/teardown may be influenced by reordering and thus change the overall test execution time. I discovered this while extending pytest-random. I also thought about using something like coverage to only run affected tests. How much testing did you do in your prototype? Do you keep track of (pip/setuptools) installed packages (versions)? Did you try changes at various levels like new files, modules, classes, metaclasses, mocks etc?
Python is very dynamic as you most likely know :) > > Regards, > Florian Schulze > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev From tibor.arpas at infinit.sk Sat Dec 13 18:56:44 2014 From: tibor.arpas at infinit.sk (Tibor Arpas) Date: Sat, 13 Dec 2014 18:56:44 +0100 Subject: [pytest-dev] Crowdfunding a pytest plugin In-Reply-To: <46C88FB1-F0FA-4AD3-B2E2-5285D438DABB@florian-schulze.net> References: <46C88FB1-F0FA-4AD3-B2E2-5285D438DABB@florian-schulze.net> Message-ID: Hello Marc and Florian, I realized earlier that the "last failures" part is already implemented in pytest-cache. Also according to Holger it will move into core for pytest-2.7. Of course we'll use that, not implement it independently. I'll rewrite the campaign text so that the feature is promoted as part of the whole solution but make it clear it's implemented in pytest-cache (or rather core). As for amount of testing in the prototype it was pretty minimal. In this regard it's quite risky project of course, and big compromises might be necessary in order to run substantial projects. I'll make it clear in the description. (and inform about any more results from testing) Thanks for feedback! Tibor On Fri, Dec 12, 2014 at 4:17 PM, Florian Schulze wrote: >> We are putting together a small Indiegogo campaign for a pytest plugin >> which should significantly improve the experience of executing a long >> test suite. We'd be glad to hear any feedback before launching it. The >> draft is here: https://www.indiegogo.com/project/preview/4fe07d66 > > > This certainly looks interesting. > > The "last failures" part is implemented in pytest-cache, are you aware of > that? > > The reordering by speed sounds nice. Be aware, that fixture setup/teardown > may be influenced by reordering and thus change the overall test execution > time. I discovered this while extending pytest-random. > > I also thought about using something like coverage to only run affected > tests. How much testing did you do in your prototype? Do you keep track of > (pip/setuptools) installed packages (versions)? Did you try changes at > various levels like new files, modules, classes, metaclasses, mocks etc? > Python is very dynamic as you most likely know :) > > Regards, > Florian Schulze From Deil.Christoph at gmail.com Thu Dec 25 23:02:26 2014 From: Deil.Christoph at gmail.com (Christoph Deil) Date: Thu, 25 Dec 2014 23:02:26 +0100 Subject: [pytest-dev] Exemplary py.test usage in open-source? In-Reply-To: <20141125090340.GR25600@merlinux.eu> References: <539667A1C8758B488DEE4541DC352B4252BD6AAC@PHX-EXRDA-S61.corp.ebay.com> <20141124080400.GM25600@merlinux.eu> <539667A1C8758B488DEE4541DC352B4252BD6F00@PHX-EXRDA-S61.corp.ebay.com> <20141125090340.GR25600@merlinux.eu> Message-ID: <6FB3D4DE-E782-47A6-BAD8-3839DB34B2BB@gmail.com> > On 25 Nov 2014, at 10:03, holger krekel wrote: > > On Tue, Nov 25, 2014 at 00:05 +0000, Hashemi, Mahmoud wrote: >> Hehe, sorry for the top post, work computer. I did actually try searching conftest.py on Github, but there were altogether too many results :) Didn't seem like there was a solid sorting mechanism either. >> >> The Mozilla tests are particularly interesting because they use Python to test non-Python codebases, which is a major area of interest in the increasingly diverse PayPal engineering community. 
Those sorts of py.test conventions really resonate for the corporate/product types :) Also, small note, I'd probably update the Projects page link to the Mozilla Github so the code is more readily showcased. Otherwise, the Projects page is fantastic :) > > I happily accept PRs listing projects. Please add your own project contexts as > that would be interesting for others as well i guess :) > > As far as testing non-python codebases goes, these two integrate C/C++: > > https://pypi.python.org/pypi/pytest-cpp > http://pytest-c-testrunner.readthedocs.org/en/latest/ > > Many are using pytest to test command line scripts (which are not necessarily > written in Python) and pytest itself has many tests written against it this way. > Didn't get around to release a "pytest-cli" plugin i have started somewhere. Hi Holger, I'm very interested in the "pytest-cli" plugin you mentioned - what does it do? I've started to write tests for command line tools (written in C++) and I'm simply using subprocess to execute them and then assert on their return code and generated stdout / stderr and files. It's working OK, but so far I haven't started running tests in parallel or had to handle processes that crash or hang, so if there's a plugin to do this I'm sure it's more robust and would like to give it a try. Cheers, Christoph > >> Hope to hear more soon! > > same here :) > > holger > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev
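For reference, a minimal sketch of the subprocess-based approach Christoph describes: run the binary, then assert on its exit code, its captured output, and any files it produced. The tool name mytool, its command-line flags and the expected "done" output are placeholders for this sketch, not a real interface.

    import subprocess

    def run_cli(args, cwd=None):
        # Run a command line tool and capture exit code, stdout and stderr.
        proc = subprocess.Popen(args, cwd=cwd,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        return proc.returncode, out, err

    def test_mytool_writes_output_file(tmpdir):
        outfile = tmpdir.join("result.txt")
        returncode, out, err = run_cli(
            ["mytool", "--output", str(outfile), "input-value"])
        assert returncode == 0
        assert err == b""
        assert outfile.check(file=1)   # the tool created the file
        assert b"done" in out          # placeholder expectation on stdout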