From elin at splunk.com  Tue Apr  2 19:20:51 2013
From: elin at splunk.com (Elizabeth Lin)
Date: Tue, 2 Apr 2013 17:20:51 +0000
Subject: [pytest-dev] parametrizing some arguments require parametrization at collection phase and some at setup time?
Message-ID: <1F76C1F02ED2C04685013385E4283980221FF4E0@mbx024-w1-ca-5.exch024.domain.local>

Hi,

I have some tests which I'd like to parametrize using both more complex
fixtures as well as simple string arguments. How are folks doing this
currently? Or is this a use case that hasn't been seen before? Using
metafunc.parametrize in a pytest_generate_tests hook won't work for me,
since I need the fixtures to have indirect=True to pass the argname as a
request.param, but the other arguments to have indirect=False.

For example, suppose I have a test fixture and test cases which look like
the following. Any suggestions for how to accomplish this would be much
appreciated!

    def pytest_generate_tests(metafunc):
        if metafunc.function.__name__ == 'test_example':
            argnames = []
            argvalues = []
            parameters = getattr(metafunc.function, 'paramlist', ())
            for p in parameters:
                if type(p) == list:
                    argnames = tuple(['myfixture'] + p)
                else:
                    argvalues.append(tuple(['std'] + p['argvalues']))
                    argvalues.append(tuple(['pro'] + p['argvalues']))
            # I want to do the following, but it won't work since some of
            # the args need indirect set to true and some need indirect
            # set to false.
            metafunc.parametrize(argnames, argvalues, indirect=True)
        elif 'myfixture' in metafunc.fixturenames:
            # we have existing tests which use the fixture, but only with standard
            metafunc.parametrize("myfixture", ["std"], indirect=True)
        else:
            # we have existing tests which use older style parametrization, non-fixture
            for p in getattr(metafunc.function, 'paramlist', ()):
                metafunc.addcall(funcargs=p)

    def params(decolist):
        def wrapper(function):
            function.paramlist = decolist
            return function
        return wrapper

    @pytest.fixture
    def myfixture(request):
        if request.param == 'std':
            myfix = SomeObject()
        elif request.param == 'pro':
            myfix = SomeOtherObject()
        def fin():
            myfix.close()
        request.addfinalizer(fin)
        return myfix

    @params([
        ['color', 'type'],
        { 'argvalues': ['blue', 'cat'] },
        { 'argvalues': ['pink', 'dog'] }
    ])
    def test_example(myfixture, color, type):
        # this is the new test we want to add
        pass

    def test_something(myfixture):
        # existing test which only uses std fixture
        pass

    @params([
        {'arg1': 1, 'arg2': 2},
        {'arg1': 3, 'arg2': 5}
    ])
    def test_old_style(arg1, arg2):
        # existing tests which don't use fixtures
        pass

Thanks for reading through this! I know it's rather long.

Cheers,
Liz

From holger at merlinux.eu  Fri Apr  5 08:14:13 2013
From: holger at merlinux.eu (holger krekel)
Date: Fri, 5 Apr 2013 06:14:13 +0000
Subject: [pytest-dev] [TIP] pytest - parametrizing some arguments require parametrization at collection phase and some at setup time?
In-Reply-To: <1F76C1F02ED2C04685013385E428398022218E3B@mbx024-w1-ca-5.exch024.domain.local>
References: <1F76C1F02ED2C04685013385E428398022218E3B@mbx024-w1-ca-5.exch024.domain.local>
Message-ID: <20130405061413.GB19653@merlinux.eu>

Hi Elizabeth,

Sorry for taking a while, but I am still not sure I fully understand.
I am pretty sure we can find a solution, but I'd like to find one that
fits the problem :)

Could you clarify your problem purely from the test function side?
Particularly, if you have this test:

    @params([
        ['color', 'type'],
        { 'argvalues': ['blue', 'cat'] },
        { 'argvalues': ['pink', 'dog'] }
    ])
    def test_example(myfixture, color, type):
        # this is the new test we want to add
        assert 0

do I understand it right that ``myfixture`` should be indirectly
created by using ["std", "pro"] as respective parameters because
there is a @params decorator? And that for

    def test_something(myfixture):
        # existing test which only uses std fixture
        assert 0

you only want ``myfixture`` created with the "std" parameter? And that for:

    @params([
        {'arg1': 1, 'arg2': 2},
        {'arg1': 3, 'arg2': 5}
    ])
    def test_old_style(arg1, arg2):
        # existing tests which don't use fixtures
        assert 0

you don't want any "myfixture" created at all?

cheers,
holger
From elin at splunk.com  Fri Apr  5 19:08:26 2013
From: elin at splunk.com (Elizabeth Lin)
Date: Fri, 5 Apr 2013 17:08:26 +0000
Subject: [pytest-dev] [TIP] pytest - parametrizing some arguments require parametrization at collection phase and some at setup time?
In-Reply-To: <20130405061413.GB19653@merlinux.eu>
Message-ID: <1F76C1F02ED2C04685013385E4283980222226EF@mbx024-w1-ca-5.exch024.domain.local>

Hi Holger,

Thanks for responding! Comments inline below.

On 4/4/13 11:14 PM, "holger krekel" wrote:

> do I understand it right that ``myfixture`` should be indirectly
> created by using ["std", "pro"] as respective parameters because
> there is a @params decorator?

Yes, myfixture should be indirectly created when we call
metafunc.parametrize to pass in ["std", "pro"] as parameters, but only
for specific tests - in this case the tests generated should be:
- std, blue, cat
- std, pink, dog
- pro, blue, cat
- pro, pink, dog

>     def test_something(myfixture):
>         # existing test which only uses std fixture
>         assert 0
>
> you only want ``myfixture`` created with the "std" parameter?

Yes, that's correct. So the only test should be
- std

> And that for:
>
>     @params([
>         {'arg1': 1, 'arg2': 2},
>         {'arg1': 3, 'arg2': 5}
>     ])
>     def test_old_style(arg1, arg2):
>         # existing tests which don't use fixtures
>         assert 0
>
> you don't want any "myfixture" created at all?

Also correct. Generated tests should be
- 1, 2
- 3, 5

Cheers,
Liz

From holger at merlinux.eu  Fri Apr  5 21:14:16 2013
From: holger at merlinux.eu (holger krekel)
Date: Fri, 5 Apr 2013 19:14:16 +0000
Subject: [pytest-dev] pytest-xprocess-0.7: manage external processes across test runs
Message-ID: <20130405191416.GI19653@merlinux.eu>

I've just released a first experimental version of pytest-xprocess,
version 0.7. It makes managing processes _across_ test runs easy. If you
have a mysql, postgres, redis, etc. database or any other service that
you'd like to start and initialize for your tests, you can use the plugin
to keep this process alive, or to kill it. Moreover, any failing test
will show the logfile lines that were written during the execution of
the test.

See https://pypi.python.org/pypi/pytest-xprocess/ for some more info.

I've used this pattern myself to manage a test couchdb instance,
including basic initialization, to avoid the overhead of starting it
for new test runs. It works for my current usage but it's not very
widely tested yet. Consider it alpha.

cheers,
holger

From holger at merlinux.eu  Fri Apr  5 21:43:33 2013
From: holger at merlinux.eu (holger krekel)
Date: Fri, 5 Apr 2013 19:43:33 +0000
Subject: [pytest-dev] [TIP] pytest - parametrizing some arguments require parametrization at collection phase and some at setup time?
In-Reply-To: <1F76C1F02ED2C04685013385E4283980222226EF@mbx024-w1-ca-5.exch024.domain.local>
References: <20130405061413.GB19653@merlinux.eu> <1F76C1F02ED2C04685013385E4283980222226EF@mbx024-w1-ca-5.exch024.domain.local>
Message-ID: <20130405194333.GJ19653@merlinux.eu>

Hi Liz,

below and attached is a solution which should fulfill your expectations,
and reports this on py.test --collectonly:

Minor deviation: the last two old-style ones are parametrized with your
intended parameters, but the reported test id is collapsed.

Lastly, I presume you are aware of @pytest.mark.parametrize, right?
Your @params decorator looks like a version requiring more typing than
necessary to me :)

HTH,
holger

    import pytest

    def pytest_generate_tests(metafunc):
        hasmyfixture = "myfixture" in metafunc.fixturenames
        paramlist = getattr(metafunc.function, "paramlist", None)

        if hasmyfixture:
            argvalues = ["std"]
            if paramlist:
                argvalues.append("pro")
            metafunc.parametrize("myfixture", argvalues, indirect=True)

        if paramlist:
            if isinstance(paramlist[0], dict):
                # old-style
                for p in paramlist:
                    metafunc.addcall(funcargs=p)
            else:
                assert isinstance(paramlist[0], list)
                argnames = paramlist[0]
                argvalues = [d["argvalues"] for d in paramlist[1:]]
                metafunc.parametrize(argnames, argvalues)

    def params(decolist):
        def wrapper(function):
            function.paramlist = decolist
            return function
        return wrapper

    class SomeObject:
        def close(self):
            print "closing"

    class SomeOtherObject(SomeObject):
        pass

    @pytest.fixture
    def myfixture(request):
        if request.param == 'std':
            myfix = SomeObject()
        elif request.param == 'pro':
            myfix = SomeOtherObject()
        else:
            assert 0, "unknown param"
        def fin():
            myfix.close()
        request.addfinalizer(fin)
        return myfix

    @params([
        ['color', 'type'],
        { 'argvalues': ['blue', 'cat'] },
        { 'argvalues': ['pink', 'dog'] }
    ])
    def test_example(myfixture, color, type):
        # this is the new test we want to add
        assert 0

    def test_something(myfixture):
        # existing test which only uses std fixture
        assert 0

    @params([
        {'arg1': 1, 'arg2': 2},
        {'arg1': 3, 'arg2': 5}
    ])
    def test_old_style(arg1, arg2):
        # existing tests which don't use fixtures
        assert 0

-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_params_liz.py
Type: text/x-python
Size: 1727 bytes
Desc: not available
URL: 
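For reference, the builtin decorator mentioned above can carry the plain
(non-fixture) arguments of test_example instead of the custom @params
helper. A sketch of how that could look, assuming myfixture keeps being
parametrized indirectly by the pytest_generate_tests hook:

    import pytest

    @pytest.mark.parametrize(("color", "type"), [
        ("blue", "cat"),
        ("pink", "dog"),
    ])
    def test_example(myfixture, color, type):
        # 'myfixture' still comes from the fixture, parametrized with
        # ["std", "pro"] by the hook; color/type are filled in directly
        assert 0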
From elin at splunk.com  Fri Apr  5 23:29:32 2013
From: elin at splunk.com (Elizabeth Lin)
Date: Fri, 5 Apr 2013 21:29:32 +0000
Subject: [pytest-dev] [TIP] pytest - parametrizing some arguments require parametrization at collection phase and some at setup time?
In-Reply-To: <20130405194333.GJ19653@merlinux.eu>
Message-ID: <1F76C1F02ED2C04685013385E4283980222259EC@mbx024-w1-ca-5.exch024.domain.local>

Thanks Holger - that works!

We've been using pytest since the 0.9.2 days, which is why we have a lot
of old tests which haven't yet been updated to use the later pytest
features. Switching everything over to use @pytest.mark.parametrize, as
well as fixtures instead of funcargs and/or setup/teardown methods, is
something I'd like to do as soon as we get some of that mythical
"downtime" between releases. :)

Cheers,
Liz
From onave at dyn.com  Mon Apr  8 16:57:54 2013
From: onave at dyn.com (Ofer Nave)
Date: Mon, 08 Apr 2013 10:57:54 -0400
Subject: [pytest-dev] fixture params must be immutable?
Message-ID: <5162DAF2.3000703@dyn.com>

Been using pytest for a few days and enjoying it greatly. However, the
last bit of code I wrote just blew up with "INTERNALERROR> TypeError:
unhashable type: 'dict'".

It seems that if you parameterize a fixture function, it must be with a
list of hashable values. This works:

    @pytest.fixture(params=[1, 2, 3])
    def foo(request):
        return request.param

    def test_bar(foo):
        assert foo

This doesn't:

    @pytest.fixture(params=[{'a': 1}, {'a': 2}, {'a': 3}])
    def foo(request):
        return request.param

    def test_bar(foo):
        assert foo

...because dict is unhashable.

I didn't see this requirement mentioned in the docs, and I don't
understand why it would be the case. Looking for enlightenment.

-ofer

From holger at merlinux.eu  Tue Apr  9 11:50:53 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 9 Apr 2013 09:50:53 +0000
Subject: [pytest-dev] fixture params must be immutable?
In-Reply-To: <5162DAF2.3000703@dyn.com>
References: <5162DAF2.3000703@dyn.com>
Message-ID: <20130409095053.GA19653@merlinux.eu>

On Mon, Apr 08, 2013 at 10:57 -0400, Ofer Nave wrote:
> I didn't see this requirement mentioned in the docs, and I don't
> understand why it would be the case. Looking for enlightenment.
The parametrization implementation needs to be refactored to avoid this
problem. There is also a related issue suffering from a similar
shortcoming: https://bitbucket.org/hpk42/pytest/issue/290/

The refactoring needs to avoid doing anything with the values (like
hashing them or putting them in a dict) and rather work with indexes
into the set of parameter values. Maybe best open an issue and reference
issue290, so we don't forget it. I'm not likely to tackle this myself in
the next few weeks, though.

best,
holger
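Until such a refactoring lands, a common workaround is to parametrize the
fixture with hashable keys and look the real values up inside the fixture.
A minimal sketch (the names here are made up for illustration):

    import pytest

    CASES = {"one": {"a": 1}, "two": {"a": 2}, "three": {"a": 3}}

    @pytest.fixture(params=sorted(CASES))
    def foo(request):
        # request.param is now a plain string, which pytest can hash;
        # the fixture resolves it to the actual dict
        return CASES[request.param]

    def test_bar(foo):
        assert foo["a"]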
From issues-reply at bitbucket.org  Tue Apr  9 16:08:06 2013
From: issues-reply at bitbucket.org (Andreas Pelme)
Date: Tue, 09 Apr 2013 14:08:06 -0000
Subject: [pytest-dev] [hpk42/pytest-cache] setup.py license (issue #7)
Message-ID: <20130409140806.10278.71252@bitbucket23.managed.contegix.com>

New issue 7: setup.py license
https://bitbucket.org/hpk42/pytest-cache/issue/7/setuppy-license

Andreas Pelme:

I believe the intended license for pytest-cache is MIT (as per LICENSE),
but setup.py says GPL, which also shows on PyPI. The setup.py file should
probably be changed.

--
This is an issue notification from bitbucket.org. You are receiving this
either because you are the owner of the issue, or you are following the
issue.

From onave at dyn.com  Tue Apr 16 21:39:54 2013
From: onave at dyn.com (Ofer Nave)
Date: Tue, 16 Apr 2013 15:39:54 -0400
Subject: [pytest-dev] excluding marked tests as default behavior
Message-ID: <516DA90A.9090108@dyn.com>

I understand I can mark tests with `@pytest.mark.whatever` and run them
specifically with `pytest -m whatever`, or skip them with
`pytest -m 'not whatever'`. But how can I configure pytest in my package
such that the default behavior is to skip those tests?

Specifically, I have some tests that are very slow (multiple seconds
each). I want to mark them 'slow', and have the default behavior when
running `pytest` be to skip them. That way they will only run if you
explicitly run them with `pytest -m slow`.

Is there a way to configure this in conftest.py?

-ofer

From onave at dyn.com  Tue Apr 16 21:49:48 2013
From: onave at dyn.com (Ofer Nave)
Date: Tue, 16 Apr 2013 15:49:48 -0400
Subject: [pytest-dev] excluding marked tests as default behavior
In-Reply-To: <516DA90A.9090108@dyn.com>
References: <516DA90A.9090108@dyn.com>
Message-ID: <516DAB5C.6020809@dyn.com>

I do have a working implementation with the following:

1) Create a marker:

    slow = pytest.mark.skipif("'SLOW' not in os.environ")

2) Mark appropriate tests:

    @slow
    def test_which_is_slow():
        ...

3) Turn on slow tests from the command line:

    $ SLOW=1 py.test

Not exactly the form factor I was looking for, but it does work.

-ofer

From holger at merlinux.eu  Tue Apr 16 22:09:04 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 16 Apr 2013 20:09:04 +0000
Subject: [pytest-dev] excluding marked tests as default behavior
In-Reply-To: <516DAB5C.6020809@dyn.com>
References: <516DA90A.9090108@dyn.com> <516DAB5C.6020809@dyn.com>
Message-ID: <20130416200904.GF5855@merlinux.eu>

Hi Ofer,

maybe this solution is more to your liking?

http://pytest.org/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option

cheers,
holger
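The approach on that docs page boils down to a command line option plus a
setup hook in conftest.py. Roughly, along the lines of the documented
example (option name "--runslow" as used there):

    # conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         help="run tests marked as slow")

    def pytest_runtest_setup(item):
        # skip @pytest.mark.slow tests unless --runslow was given
        if 'slow' in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")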
From onave at dyn.com  Tue Apr 16 22:16:16 2013
From: onave at dyn.com (Ofer Nave)
Date: Tue, 16 Apr 2013 16:16:16 -0400
Subject: [pytest-dev] excluding marked tests as default behavior
In-Reply-To: <20130416200904.GF5855@merlinux.eu>
References: <516DA90A.9090108@dyn.com> <516DAB5C.6020809@dyn.com> <20130416200904.GF5855@merlinux.eu>
Message-ID: <516DB190.3080509@dyn.com>

That's exactly what I was looking for -- thanks!

-ofer

From onave at dyn.com  Wed Apr 17 15:48:49 2013
From: onave at dyn.com (Ofer Nave)
Date: Wed, 17 Apr 2013 09:48:49 -0400
Subject: [pytest-dev] pytest-cov + gevent reporting incorrect results
Message-ID: <516EA841.9080605@dyn.com>

Anyone have experience using pytest-cov on a codebase that makes use of
gevent greenlets?

I just started using pytest-cov for the first time yesterday, and
thought it'd be fun to try to get my test coverage from 85% to 100%.
However, the last few reportedly not-covered lines are, in fact,
executed multiple times. I'm 100% certain of this (it's a critical
section, the whole system depends on it, and I've added print statements
just to be dumbly sure).

The only possibility that comes to mind is that perhaps this is because
those sections of code are in a spawned greenlet, and maybe that messes
with how pytest-cov works -- can't be sure because I don't know how it
works. :)

-ofer

From Ronny.Pfannschmidt at gmx.de  Wed Apr 17 17:22:46 2013
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Wed, 17 Apr 2013 17:22:46 +0200
Subject: [pytest-dev] pytest-cov + gevent reporting incorrect results
In-Reply-To: <516EA841.9080605@dyn.com>
References: <516EA841.9080605@dyn.com>
Message-ID: <516EBE46.40606@gmx.de>

Hi Ofer,

last I worked on greenlets, they created custom thread states for each
greenlet. I vaguely remember that the trace function used by coverage
(which is what pytest-cov uses) is thread-specific, so I'm under the
impression that this may be an issue with greenlets not picking up trace
functions. However, I don't have the time to test that hypothesis soon.

A quick analysis shows that the greenlet module has its own settrace
call; you might want to experiment with having coverage use greenlet's
settrace function as well.

-- Ronny
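An untested sketch of that experiment: capture whatever trace function
coverage installed in the main greenlet, and re-install it whenever
greenlets switch. greenlet.settrace() exists as of greenlet 0.4; whether
this actually makes coverage see greenlet code is an assumption to verify.

    # conftest.py -- experimental, not verified to work
    import sys
    import greenlet

    def pytest_configure(config):
        trace = sys.gettrace()  # coverage's trace function, if it is active
        if trace is not None:
            def restore(event, args):
                # called by greenlet on every 'switch'/'throw'
                sys.settrace(trace)
            greenlet.settrace(restore)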
From carpentier.th at gmail.com  Thu Apr 18 11:24:50 2013
From: carpentier.th at gmail.com (Thomas CARPENTIER)
Date: Thu, 18 Apr 2013 11:24:50 +0200
Subject: [pytest-dev] Questions about paramtrize and csv files

Hi,

I've been working with pytest for a few weeks now to automate my
functional tests using Webdriver and Python, and I have some questions
about how to parametrize tests.

After reading the docs, I've set up my tests like:

    @pytest.mark.parametrize(("Url", "name", "LastName", "IdCustomer"), [
        ("http://myurl.com", "myname", "mylastname", "12345"),
    ])
    def test_get_infos_about_client(Url, name, LastName, IdCustomer):
        # Do something with arguments
        [...]

My problem is that I have two testing environments, ENV1 and ENV2, and
the data (in particular IdCustomer) are not the same between ENV1 and
ENV2 (these data are generated during the creation of the customer).

So I'm thinking about using csv files to inject the data instead of the
parametrize function, but I didn't find any docs about it. Is it
possible? I've tried

    def pytest_generate_tests(metafunc):
        # read csv file
        metafunc.parametrize(header, datas)

but it doesn't work! Can I do the same thing with another method?

Thanks for your help

Thomas

From holger at merlinux.eu  Thu Apr 18 11:35:18 2013
From: holger at merlinux.eu (holger krekel)
Date: Thu, 18 Apr 2013 09:35:18 +0000
Subject: [pytest-dev] Questions about paramtrize and csv files
Message-ID: <20130418093518.GJ5855@merlinux.eu>

hi Thomas,

On Thu, Apr 18, 2013 at 11:24 +0200, Thomas CARPENTIER wrote:
> So I'm thinking about using csv files to inject the data instead of the
> parametrize function, but I didn't find any docs about it. Is it
> possible?

usually pytest_generate_tests is the right place to perform
config-dependent parametrization. Could you post what you tried there
concretely, including failure tracebacks?

holger

From carpentier.th at gmail.com  Thu Apr 18 12:19:40 2013
From: carpentier.th at gmail.com (Thomas CARPENTIER)
Date: Thu, 18 Apr 2013 12:19:40 +0200
Subject: [pytest-dev] Questions about paramtrize and csv files
In-Reply-To: <20130418093518.GJ5855@merlinux.eu>

Hi Holger,

here you can find my code: http://pastebin.com/fmitjbw2
and here the csv: http://pastebin.com/UHv2XPt5
the tracebacks: http://pastebin.com/E5p3tLCR

thanks.

Thomas
From holger at merlinux.eu  Thu Apr 18 12:34:26 2013
From: holger at merlinux.eu (holger krekel)
Date: Thu, 18 Apr 2013 10:34:26 +0000
Subject: [pytest-dev] Questions about paramtrize and csv files
In-Reply-To: References: <20130418093518.GJ5855@merlinux.eu>
Message-ID: <20130418103426.GK5855@merlinux.eu>

hi Thomas,

On Thu, Apr 18, 2013 at 12:19 +0200, Thomas CARPENTIER wrote:
> here you can find my code: http://pastebin.com/fmitjbw2
> and here the csv: http://pastebin.com/UHv2XPt5
> the tracebacks: http://pastebin.com/E5p3tLCR

If those 44 "datas" rows each contain 4-tuples you can just call

    metafunc.parametrize(("url", "name", "lastname", "idCustomer"), datas)

The extra [] list around "datas" leads to the error.

holger
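In other words, the difference is presumably this (a reconstruction from
the description above; the pastebin contents are not preserved in the
archive):

    # one single parameter set whose value is the whole list -- fails,
    # since it doesn't match the four argnames:
    metafunc.parametrize(("url", "name", "lastname", "idCustomer"), [datas])

    # one parameter set per CSV row -- what parametrize expects:
    metafunc.parametrize(("url", "name", "lastname", "idCustomer"), datas)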
From carpentier.th at gmail.com  Thu Apr 18 12:51:53 2013
From: carpentier.th at gmail.com (Thomas CARPENTIER)
Date: Thu, 18 Apr 2013 12:51:53 +0200
Subject: [pytest-dev] Questions about paramtrize and csv files
In-Reply-To: <20130418103426.GK5855@merlinux.eu>

Thanks, but I still have the same errors :( I will try something else...

Thomas
From holger at merlinux.eu  Thu Apr 18 13:24:57 2013
From: holger at merlinux.eu (holger krekel)
Date: Thu, 18 Apr 2013 11:24:57 +0000
Subject: [pytest-dev] Questions about paramtrize and csv files
Message-ID: <20130418112457.GL5855@merlinux.eu>

On Thu, Apr 18, 2013 at 12:51 +0200, Thomas CARPENTIER wrote:
> Thanks, but I still have the same errors :(

Here is a simple test module that works:

    def pytest_generate_tests(metafunc):
        datas = [("http://something", "hello", "world", "123123")] * 44
        metafunc.parametrize(("url", "name", "lastname", "idCustomer"), datas)

    def test_data(url, name, lastname, idCustomer):
        assert 0, locals()

This will run 44 tests as expected. And of course you are free to
generate your "datas" in whichever way you like.

holger
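For the CSV case specifically, generating "datas" could look roughly like
this (a sketch: the file name and the header-row-plus-data-rows layout are
assumptions):

    import csv

    def pytest_generate_tests(metafunc):
        if "idCustomer" in metafunc.fixturenames:
            with open("customers.csv") as f:
                rows = list(csv.reader(f))
            # first row holds the argnames, the rest are parameter sets
            header, datas = rows[0], [tuple(r) for r in rows[1:]]
            metafunc.parametrize(header, datas)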
> > > > --------------------- > > Thomas CARPENTIER > > 155 Rue Fleury > > 92140 CLAMART > > > > Tel : 06.18.09.10.97 > > Mail : carpentier.th at gmail.com > > > > > > > > On Thu, Apr 18, 2013 at 12:34 PM, holger krekel > wrote: > > > > > hi Thomas, > > > > > > On Thu, Apr 18, 2013 at 12:19 +0200, Thomas CARPENTIER wrote: > > > > Hi Holger, > > > > > > > > > > > > here you can find my code : http://pastebin.com/fmitjbw2 > > > > > > > > and here the csv : http://pastebin.com/UHv2XPt5 > > > > > > > > the tracebacks : http://pastebin.com/E5p3tLCR > > > > > > > > thanks. > > > > > > If those 44 "datas" rows each contain 4-tuples you can just call > > > > > > metafunc.parametrize(("url", "name", "lastname", "idCustomer"), > datas) > > > > > > The extra [] list around "datas" leads to the error. > > > > > > holger > > > > > > > > > > > > > > > > > > > --------------------- > > > > Thomas CARPENTIER > > > > 155 Rue Fleury > > > > 92140 CLAMART > > > > > > > > Tel : 06.18.09.10.97 > > > > Mail : carpentier.th at gmail.com > > > > > > > > > > > > > > > > On Thu, Apr 18, 2013 at 11:35 AM, holger krekel > > > wrote: > > > > > > > > > hi Thomas, > > > > > > > > > > On Thu, Apr 18, 2013 at 11:24 +0200, Thomas CARPENTIER wrote: > > > > > > Hi, > > > > > > > > > > > > I'm working with pytest since few weeks now for automate my > > > functionnal > > > > > > tests using Webdriver and python. And i have some questions about > > > how to > > > > > > paramatrize tests. > > > > > > > > > > > > After reading docs, i've setup my tests like : > > > > > > > > > > > > @pytest.mark.parametrize(("Url", "name", "LastName", > "IdCustomer"), [ > > > > > > ("http://myurl.com", "myname", "mylastname", "12345"), > > > > > > ]) > > > > > > def test_get_infos_about_client(Url, name, Lastname, IdCustomer): > > > > > > #Do something with arguments > > > > > > [...] > > > > > > > > > > > > > > > > > > My problem is I've 2 testing environnements ENV1 and ENV2, > however > > > datas > > > > > ( > > > > > > in particular IdCustomer are not the same between ENV1 and ENV2 > ( > > > these > > > > > > datas are generated during the creation of the Customer) > > > > > > > > > > > > So , I'm thinking about use csv files to inject datas in place of > > > > > > parametrize function. But I did'nt find any docs about it. Is it > > > > > possible? > > > > > > > > > > > > I've tried > > > > > > > > > > > > def pytest_generate_tests(metafunc): > > > > > > #read csv file > > > > > > metafunc.parametrize(header, datas) > > > > > > > > > > > > But it doesn't work! > > > > > > > > > > > > Can I do the same thing with another method ? > > > > > > > > > > > > > > > usually pytest_generate_tests is the right place to perform > > > > > config-dependent parametrization. Could you post what you > > > > > tried there concretely including failure tracebacks? > > > > > > > > > > holger > > > > > > > > > > > > > > > > Thnaks for your help > > > > > > > > > > > > > > > > > > Thomas > > > > > > > > > > > _______________________________________________ > > > > > > Pytest-dev mailing list > > > > > > Pytest-dev at python.org > > > > > > http://mail.python.org/mailman/listinfo/pytest-dev > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
From nicoddemus at gmail.com  Fri Apr 19 02:46:35 2013
From: nicoddemus at gmail.com (Bruno Oliveira)
Date: Thu, 18 Apr 2013 21:46:35 -0300
Subject: [pytest-dev] Announce: pytest-qt: fixture for testing PySide and
	PyQt applications
Message-ID: 

Hello all,

I would like to announce pytest-qt, which is a pytest plugin that helps
writing tests for GUIs written in PySide or PyQt.

The plugin introduces a new fixture, qtbot, which contains functions to
send events to one or more widgets like a real user, allowing you to test
GUI specific behaviors. Current release is 1.0 which contains the basic
API, but more functionality to make GUI testing easier should be added in
the future.

Docs: https://pytest-qt.readthedocs.org
Repo: https://github.com/nicoddemus/pytest-qt

Best Regards,
Bruno
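Based on that description, a qtbot test looks roughly like this. A rough
sketch only: LoginDialog and its attributes are made up, and the method
names follow the QTest-style API the docs describe -- check the docs link
above for the authoritative names:

    from PySide import QtCore

    def test_login_dialog(qtbot):
        dialog = LoginDialog()           # hypothetical application widget
        qtbot.addWidget(dialog)          # widget gets cleaned up after the test
        qtbot.keyClicks(dialog.username, "admin")
        qtbot.mouseClick(dialog.ok_button, QtCore.Qt.LeftButton)
        assert dialog.accepted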
From lklrmn at gmail.com  Sat Apr 20 05:55:31 2013
From: lklrmn at gmail.com (Leah Klearman)
Date: Fri, 19 Apr 2013 20:55:31 -0700
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: 
References: 
Message-ID: 

Hi Holger and other py.test mavens,

Bob has reported a problem with my py.test plugin pytest-rerunfailures [1]
not re-running the setup before rerunning the test.

Looking at my code, it has pytest_runtest_protocol() [2] looping on
_pytest.runner.runtestprotocol() [3], which in turn runs the setup, the
test, and the teardown.

[1] https://github.com/klrmn/pytest-rerunfailures
[2] https://github.com/klrmn/pytest-rerunfailures/blob/master/rerunfailures/plugin.py#L46
[3] https://bitbucket.org/hpk42/pytest/src/fdc28ac2029f1c0be1dac4991c2f1b014c39a03f/_pytest/runner.py?at=default#cl-65

I haven't taken the hours needed to get my head fully into py.test plugin
development mode, but I'm not sure I can implement a fix at my layer. I'm
hoping someone here will have some insight.

Thanks,
-Leah

On Sun, Apr 14, 2013 at 6:29 PM, Bob Silverberg wrote:

> I just verified this behaviour myself with a simple test [1]. I see it
> with both funcargs and fixtures, but I'm not sure if it has to do with the
> plugin, or the way py.test works. It does inject the value into the test
> method, but it doesn't rerun the fixture, so it seems like it is caching
> the first run of the fixture and using that on subsequent runs.
>
> I'm not sure if this is something that the plugin can have any effect on,
> or if it's just the way fixtures work. It is specified for this fixture
> that it is scope='function', and perhaps py.test makes that happen by
> checking the function name, which is, of course, the same for each run. I
> did try removing the scope argument from the fixture but that had no
> effect.
>
> Do you have any thoughts about this, @klrmn?
>
> [1] https://gist.github.com/bobsilverberg/5385035

From Ronny.Pfannschmidt at gmx.de  Sat Apr 20 08:35:38 2013
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Sat, 20 Apr 2013 08:35:38 +0200 (CEST)
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: 
References: ,
Message-ID: 

An HTML attachment was scrubbed...

From brianna.laugher at gmail.com  Mon Apr 22 07:48:44 2013
From: brianna.laugher at gmail.com (Brianna Laugher)
Date: Mon, 22 Apr 2013 15:48:44 +1000
Subject: [pytest-dev] Using a context manager in a funcarg/fixture
Message-ID: 

Hi folks,

I posted this a couple of weeks ago and would appreciate it if anyone can
offer a useful answer.

http://stackoverflow.com/questions/15801662/py-test-how-to-use-a-context-manager-in-a-funcarg-fixture

I feel like some kind of fixture definition that involves a yield
statement could be useful for this?

cheers
Brianna

--
They've just been waiting in a mountain for the right moment:
http://modernthings.org/

From brianna.laugher at gmail.com  Mon Apr 22 08:05:48 2013
From: brianna.laugher at gmail.com (Brianna Laugher)
Date: Mon, 22 Apr 2013 16:05:48 +1000
Subject: [pytest-dev] parametrize + xfail
Message-ID: 

Hi again :)

A common problem I have: a test is parametrized with
py.test.mark.parametrize, I discover a bug, and I want to add another test
case for that bug and mark it as xfail.

I have done something based on
http://stackoverflow.com/questions/12364801/how-to-mark-some-generated-tests-as-xfail-skip
for pytest_generate type functions, but it is awkward to do without
disrupting the existing cases, and somewhat overkill in cases where the
xfail is likely to be resolved (the bug is fixed) soon.

With py.test.mark.parametrize, I notice
https://bitbucket.org/hpk42/pytest/issue/124/allow-individual-parametrized-values-to-be

I was just thinking about this now and I wonder if it is possible to build
a decorator like this?

    @py.test.mark.parametrizexfail(('duration', 'expectedBrackets'), [
        (7, [None, None, None, 7]),
        (19, [None, 7, 6, 6]),
    ])
    @py.test.mark.parametrize(('duration', 'expectedBrackets'), [
        (24, [6, 6, 6, 6]),
        (23, [6, 6, 6, 6]),
        (25, [6, 6, 6, 6]),
    ])

So 5 cases would be fed into the test, with only the first two marked as
xfail.

Also while I'm at it, it could be good for pytest to issue a warning if
someone uses a mark called parameterize, parametrise or parameterise,
because I've been caught pondering why a mark wasn't working properly at
least once :)

cheers
Brianna
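Until something like that decorator exists, one stop-gap that needs no hook
changes is an imperative xfail inside the test body, keyed on the known-bad
parameters. A sketch using the values above (compute_brackets is an assumed
function standing in for the code under test):

    import pytest

    # stop-gap: keep one flat parametrize list and imperatively xfail the
    # cases covered by the open bug
    KNOWN_BAD = (7, 19)

    @pytest.mark.parametrize(('duration', 'expectedBrackets'), [
        (7, [None, None, None, 7]),
        (19, [None, 7, 6, 6]),
        (24, [6, 6, 6, 6]),
        (23, [6, 6, 6, 6]),
        (25, [6, 6, 6, 6]),
    ])
    def test_brackets(duration, expectedBrackets):
        if duration in KNOWN_BAD:
            pytest.xfail("known bug, not fixed yet")
        assert compute_brackets(duration) == expectedBrackets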
From holger at merlinux.eu  Mon Apr 22 10:27:34 2013
From: holger at merlinux.eu (holger krekel)
Date: Mon, 22 Apr 2013 08:27:34 +0000
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: 
References: 
Message-ID: <20130422082733.GQ5855@merlinux.eu>

Hi Leah, Bob,

On Fri, Apr 19, 2013 at 20:55 -0700, Leah Klearman wrote:
> Bob has reported a problem with my py.test plugin pytest-rerunfailures [1]
> not re-running the setup before rerunning the test.
> [...]
> I'm hoping someone here will have some insight.

It's a bit intricate. A function item keeps around some fixture state
and was so far not intended to be run multiple times. I went ahead and
tried to improve the behaviour to better allow re-running. Please try
with

    pip install -i http://pypi.testrun.org -U pytest

which should give you pytest-2.3.5.dev16 at least. This is bound to be
released soon so quick feedback is welcome. If you still have problems
please try to send a minimal test file which shows undesired behaviour.

A word of warning: your calling of runtestprotocol() is not quite right
and might lead to problems. "nextitem" should really be the item that
is going to be run next. So if you re-run three times the first two
invocations should have nextitem=item.

best,
holger

From lklrmn at gmail.com  Tue Apr 23 00:08:44 2013
From: lklrmn at gmail.com (Leah Klearman)
Date: Mon, 22 Apr 2013 15:08:44 -0700
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: <20130422082733.GQ5855@merlinux.eu>
References: 
	<20130422082733.GQ5855@merlinux.eu>
Message-ID: 

Hey Holger,

Thank you for the timely patch. I haven't seen Bob online today, but
hopefully he'll be able to test it very soon.

> A word of warning: your calling of runtestprotocol() is not quite right
> and might lead to problems. "nextitem" should really be the item that
> is going to be run next. So if you re-run three times the first two
> invocations should have nextitem=item.

I agree that that would be ideal, but it only re-runs the test if it fails,
so I would have to know the future in order to know what value to send
nextitem=. Since hopefully most tests pass the first try, sending nextitem
instead of item seems to be the better answer.

Thanks!
-Leah
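The two positions can be reconciled without predicting the future: decide
per attempt, and pass nextitem=item for any attempt that *may* be followed
by a re-run, reserving the real nextitem for the final one. A rough sketch
of that loop shape, not the plugin's actual code (runtestprotocol is the
helper from _pytest.runner referenced above):

    from _pytest.runner import runtestprotocol

    def pytest_runtest_protocol(item, nextitem):
        # sketch only: the real plugin would read the re-run count from a
        # command line option and would also log intermediate failures
        reruns = 2
        for attempt in range(reruns + 1):
            final = attempt == reruns
            # every attempt that can be followed by another run of the
            # same item pretends the same item runs next, so fixture
            # finalization is computed accordingly
            reports = runtestprotocol(item, log=False,
                                      nextitem=item if not final else nextitem)
            if all(rep.passed for rep in reports) or final:
                for rep in reports:
                    item.ihook.pytest_runtest_logreport(report=rep)
                return True

The remaining imprecision is the one Leah names: a pass on a non-final
attempt has already torn down with nextitem=item, leaving higher-scoped
fixtures to be finalized lazily later.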
From holger at merlinux.eu  Tue Apr 23 09:16:41 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 23 Apr 2013 07:16:41 +0000
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: 
References: <20130422082733.GQ5855@merlinux.eu>
Message-ID: <20130423071641.GR5855@merlinux.eu>

Hey Leah,

On Mon, Apr 22, 2013 at 15:08 -0700, Leah Klearman wrote:
> I agree that that would be ideal, but it only re-runs the test if it fails,
> so I would have to know the future in order to know what value to send
> nextitem=. Since hopefully most tests pass the first try, sending nextitem
> instead of item seems to be the better answer.

FYI pytest performs teardown/finalization according to nextitem: if your
next item uses different fixtures, the "item" ones may be torn down
(it also depends on the caching scope of the fixtures).

Do you generally want to re-run the setup or could you also settle
on just re-running the "call" phase of a test?  The latter one would
be easier and probably not disrupt fixture mechanics.

best,
holger

From holger at merlinux.eu  Tue Apr 23 09:20:17 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 23 Apr 2013 07:20:17 +0000
Subject: [pytest-dev] Using a context manager in a funcarg/fixture
In-Reply-To: 
References: 
Message-ID: <20130423072017.GS5855@merlinux.eu>

Hi Brianna,

On Mon, Apr 22, 2013 at 15:48 +1000, Brianna Laugher wrote:
> I posted this a couple of weeks ago and would appreciate it if anyone can
> offer a useful answer.
>
> http://stackoverflow.com/questions/15801662/py-test-how-to-use-a-context-manager-in-a-funcarg-fixture
>
> I feel like some kind of fixture definition that involves a yield
> statement could be useful for this?

I guess some playing around with adding direct support for context
managers as fixtures could be interesting. Not bound to do that anytime
soon myself, though.

holger
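For the record, such direct support might look like the yield-based fixture
Brianna hints at. Purely hypothetical at this point -- the decorator below
does not exist in any pytest release, and acquire_resource is an assumed
helper:

    import pytest

    # code before the yield would run as setup, code after as teardown
    @pytest.contextfixture              # hypothetical decorator
    def resource(request):
        res = acquire_resource()        # assumed helper
        try:
            yield res
        finally:
            res.close()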
From holger at merlinux.eu  Tue Apr 23 09:27:20 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 23 Apr 2013 07:27:20 +0000
Subject: [pytest-dev] parametrize + xfail
In-Reply-To: 
References: 
Message-ID: <20130423072720.GT5855@merlinux.eu>

Hi Brianna,

first off a request to you :)

You filed https://bitbucket.org/hpk42/pytest/issue/279/ and Floris
improved assertion reporting accordingly. Could you provide testing
and feedback?

On Mon, Apr 22, 2013 at 16:05 +1000, Brianna Laugher wrote:
> A common problem I have: a test is parametrized with
> py.test.mark.parametrize, I discover a bug, and I want to add another test
> case for that bug and mark it as xfail.
> [...]
> So 5 cases would be fed into the test, with only the first two marked as
> xfail.

I guess this is possible. I'd probably prefer something like:

    @py.test.mark.parametrize(('duration', 'expectedBrackets'), [
        pytest.mark.xfail((7, [None, None, None, 7])),
        pytest.mark.xfail((19, [None, 7, 6, 6])),
        (24, [6, 6, 6, 6]),
        (23, [6, 6, 6, 6]),
        (25, [6, 6, 6, 6]),
    ])

which is a bit easier to switch between xfail and not. If you like
that as well please open an issue and at best try to come up with
a patch :)

> Also while I'm at it, it could be good for pytest to issue a warning if
> someone uses a mark called parameterize, parametrise or parameterise,
> because I've been caught pondering why a mark wasn't working properly at
> least once :)

Did you try running with "py.test --strict" (which you can make a general
default through "addopts" in a pytest config file)?  It bails out if you
use non-registered markers.

cheers,
holger
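For projects that do register their markers, the combination holger
describes looks like this (file contents are illustrative; "webtest" stands
in for a project marker). Builtin marks such as parametrize are registered
by pytest itself, so --strict rejects only misspellings like
"parameterize":

    [pytest]
    addopts = --strict
    markers =
        webtest: mark a test as exercising the web frontend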
From Ronny.Pfannschmidt at gmx.de  Tue Apr 23 09:33:50 2013
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Tue, 23 Apr 2013 09:33:50 +0200
Subject: [pytest-dev] Using a context manager in a funcarg/fixture
In-Reply-To: <20130423072017.GS5855@merlinux.eu>
References: 	<20130423072017.GS5855@merlinux.eu>
Message-ID: <5176395E.7070101@gmx.de>

Hi Holger, Brianna,

there is https://github.com/pelme/pytest-contextfixture
which seems to handle it for the time being

-- Ronny

From holger at merlinux.eu  Tue Apr 23 09:39:23 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 23 Apr 2013 07:39:23 +0000
Subject: [pytest-dev] Using a context manager in a funcarg/fixture
In-Reply-To: <20130423072017.GS5855@merlinux.eu>
References: 	<20130423072017.GS5855@merlinux.eu>
Message-ID: <20130423073923.GU5855@merlinux.eu>

Hi Brianna, all,

for clarification why i think this needs more discussion and playing
around: what exactly should a fixture manager's __exit__() see as
exception value?  If we have multiple "contextmanager" fixtures like
this:

    def test_fixtures(fix1, fix2, fix3, ...):
        ...

and fix2's setup fails, should fix1's __exit__ see that exception?
Or should only exceptions from the actual test body be seen and then
be seen by each fixN's __exit__ repeatedly?

What about fixtures that have caching scopes higher than "function"?
Should they ever receive exceptions in their __exit__?

best,
holger

From brianna.laugher at gmail.com  Tue Apr 23 09:40:56 2013
From: brianna.laugher at gmail.com (Brianna Laugher)
Date: Tue, 23 Apr 2013 17:40:56 +1000
Subject: [pytest-dev] Using a context manager in a funcarg/fixture
In-Reply-To: <5176395E.7070101@gmx.de>
References: <20130423072017.GS5855@merlinux.eu> <5176395E.7070101@gmx.de>
Message-ID: 

On 23 April 2013 17:33, Ronny Pfannschmidt wrote:

> there is https://github.com/pelme/pytest-contextfixture
> which seems to handle it for the time being

Hi Ronny,

I didn't know about that, but I think this TODO is telling:
https://github.com/pelme/pytest-contextfixture/blob/master/pytest_contextfixture.py#L17
(ie, it's not really solved)

thanks
Brianna

From brianna.laugher at gmail.com  Tue Apr 23 09:46:09 2013
From: brianna.laugher at gmail.com (Brianna Laugher)
Date: Tue, 23 Apr 2013 17:46:09 +1000
Subject: [pytest-dev] Using a context manager in a funcarg/fixture
In-Reply-To: <20130423073923.GU5855@merlinux.eu>
References: <20130423072017.GS5855@merlinux.eu>
	<20130423073923.GU5855@merlinux.eu>
Message-ID: 

On 23 April 2013 17:39, holger krekel wrote:

> for clarification why i think this needs more discussion and playing
> around: what exactly should a fixture manager's __exit__() see as
> exception value?
> [...]
> What about fixtures that have caching scopes higher than "function"?
> Should they ever receive exceptions in their __exit__?

Ah right... yes. I see how that is a problem. Maybe then I need to
refactor my context manager so it can be used more easily as a funcarg. :)

Brianna
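One such refactor is to drive the context manager from a plain fixture and
give up on passing exception details to __exit__. A minimal sketch,
assuming some context manager class ManagedResource from the test code:

    import pytest

    @pytest.fixture
    def resource(request):
        cm = ManagedResource()          # assumed context manager class
        value = cm.__enter__()
        # __exit__ always receives (None, None, None) here, which
        # sidesteps the question of which exception it should see
        request.addfinalizer(lambda: cm.__exit__(None, None, None))
        return value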
From brianna.laugher at gmail.com  Tue Apr 23 11:06:26 2013
From: brianna.laugher at gmail.com (Brianna Laugher)
Date: Tue, 23 Apr 2013 19:06:26 +1000
Subject: [pytest-dev] parametrize + xfail
In-Reply-To: <20130423072720.GT5855@merlinux.eu>
References: 	<20130423072720.GT5855@merlinux.eu>
Message-ID: 

On 23 April 2013 17:27, holger krekel wrote:

> first off a request to you :)
>
> You filed https://bitbucket.org/hpk42/pytest/issue/279/ and Floris
> improved assertion reporting accordingly. Could you provide testing
> and feedback?

I will definitely do that, it escaped my attention somehow!

> I guess this is possible. I'd probably prefer something like:
> [...]
> which is a bit easier to switch between xfail and not. If you like
> that as well please open an issue and at best try to come up with
> a patch :)

That is fine by me, I will try to see what I can do. :)

> Did you try running with "py.test --strict" (which you can make a general
> default through "addopts" in a pytest config file)?  It bails out if you
> use non-registered markers.

We don't use registered markers (partly because we use marks to refer to
bug tracker items), so that wouldn't help. But it occurs to me there would
be a hook somewhere where I could inspect the marks and raise some kind of
alert for myself; I don't need pytest to do it. I will look into that. :)

thanks!
Brianna
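Such a hook can be small. A conftest.py sketch (the misspelling list is
illustrative); item.keywords contains the names of the marks applied to a
test, so a collection-time check suffices:

    MISSPELLINGS = ("parameterize", "parametrise", "parameterise")

    def pytest_collection_modifyitems(items):
        # fail loudly at collection time on a misspelled parametrize mark
        for item in items:
            for name in MISSPELLINGS:
                if name in item.keywords:
                    raise RuntimeError("misspelled mark %r on %s"
                                       % (name, item.nodeid))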
From holger at merlinux.eu  Tue Apr 23 12:25:55 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 23 Apr 2013 10:25:55 +0000
Subject: [pytest-dev] parametrize + xfail
In-Reply-To: 
References: <20130423072720.GT5855@merlinux.eu>
Message-ID: <20130423102555.GV5855@merlinux.eu>

On Tue, Apr 23, 2013 at 19:06 +1000, Brianna Laugher wrote:
> I will definitely do that, it escaped my attention somehow!

thanks.

> That is fine by me, I will try to see what I can do. :)

The parametrization code is in _pytest/python.py and it's a bit tricky
because of its support for old-style (addcall) and new-style
(parametrize) parametrization and because it integrates with the
general fixture mechanism. IOW, please don't feel bad if you get lost
in that code :)

> We don't use registered markers (partly because we use marks to refer to
> bug tracker items), so that wouldn't help. But it occurs to me there would
> be a hook somewhere where I could inspect the marks and raise some kind of
> alert for myself; I don't need pytest to do it. I will look into that. :)

I recommend thinking about a way to use strict registration. We could
think about allowing wild-cards in mark registrations. So if you have
"bugNNN" you could register "bug*".

cheers,
holger

From lklrmn at gmail.com  Tue Apr 23 18:56:55 2013
From: lklrmn at gmail.com (Leah Klearman)
Date: Tue, 23 Apr 2013 09:56:55 -0700
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: <20130423071641.GR5855@merlinux.eu>
References: <20130422082733.GQ5855@merlinux.eu>
	<20130423071641.GR5855@merlinux.eu>
Message-ID: 

Holger,

The only reason I can think of why py.test would perform teardown /
finalization differently depending on the nextitem is if class, module,
and package level fixtures are being used. If only method level fixtures
are used, then they should always get created and always get torn down
for each test case. Is that not how it works?

We can't re-run just the 'call' phase because the failure of the previous
'call' phase may have left behind a partial situation (see automated
testing before we had setup and teardown and fixtures).

-Leah
From holger at merlinux.eu  Tue Apr 23 22:30:10 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 23 Apr 2013 20:30:10 +0000
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: 
References: <20130422082733.GQ5855@merlinux.eu>
	<20130423071641.GR5855@merlinux.eu>
Message-ID: <20130423203010.GZ5855@merlinux.eu>

On Tue, Apr 23, 2013 at 09:56 -0700, Leah Klearman wrote:
> The only reason I can think of why py.test would perform teardown /
> finalization differently depending on the nextitem is if class, module,
> and package level fixtures are being used.

Yes, i tried to say something equivalent in my previous mail.

> If only method level fixtures are used, then they should always get
> created and always get torn down for each test case. Is that not how
> it works?

Effectively it should be like this and probably independently of
the value of nextitem. If you find issues let us know.

> We can't re-run just the 'call' phase because the failure of the previous
> 'call' phase may have left behind a partial situation (see automated
> testing before we had setup and teardown and fixtures).

Sure, good point.

cheers,
holger

From lklrmn at gmail.com  Tue Apr 23 23:28:15 2013
From: lklrmn at gmail.com (Leah Klearman)
Date: Tue, 23 Apr 2013 14:28:15 -0700
Subject: [pytest-dev] [pytest-rerunfailures] pytest-rerunfailures not using
	fixtures on reruns (#10)
In-Reply-To: <20130423203010.GZ5855@merlinux.eu>
References: <20130422082733.GQ5855@merlinux.eu>
	<20130423071641.GR5855@merlinux.eu>
	<20130423203010.GZ5855@merlinux.eu>
Message-ID: 

Hi Holger,

Bob appears to be on vacation, so I asked Stephen to run the job in its
natural environment. The run with the dev version of pytest and my plugin
enabled had the same two failures as the project had with the previous
release of pytest and no plugin. An analysis of whether the failures
actually act differently will have to await Bob returning from vacation.

For our reference, the jenkins config: http://pastie.org/7705026

Wish I could have given you better feedback on the patch. Please use your
discretion as to whether or not to include it in the upcoming release.

Thanks for your help.
-Leah

From holger at merlinux.eu  Tue Apr 30 16:39:15 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 30 Apr 2013 14:39:15 +0000
Subject: [pytest-dev] pytest-2.3.5: bug fixes and little improvements
Message-ID: <20130430143915.GH5855@merlinux.eu>

pytest-2.3.5: bug fixes and little improvements
===========================================================================

pytest-2.3.5 is a maintenance release with many bug fixes and little
improvements. See the changelog below for details. No backward
compatibility issues are foreseen and all plugins which worked with the
prior version are expected to work unmodified.

Speaking of which, a few interesting new plugins saw the light last month:

- pytest-instafail: show failure information while tests are running
- pytest-qt: testing of GUI applications written with QT/Pyside
- pytest-xprocess: managing external processes across test runs
- pytest-random: randomize test ordering

And several others like pytest-django saw maintenance releases. For a
more complete list, check out
https://pypi.python.org/pypi?%3Aaction=search&term=pytest&submit=search.

For general information see:

    http://pytest.org/

To install or upgrade pytest:

    pip install -U pytest # or
    easy_install -U pytest

Particular thanks to Floris, Ronny, Benjamin and the many bug reporters
and fix providers.
may the fixtures be with you,
holger krekel

Changes between 2.3.4 and 2.3.5
-----------------------------------

- never consider a fixture function for test function collection

- allow re-running of test items / helps to fix the pytest-rerunfailures
  plugin and also helps to keep fewer fixture/resource references alive

- put captured stdout/stderr into junitxml output even for passing tests
  (thanks Adam Goucher)

- Issue 265 - integrate nose setup/teardown with setupstate so it doesn't
  try to teardown if it did not setup

- issue 271 - don't write junitxml on slave nodes

- Issue 274 - don't try to show full doctest example when doctest does
  not know the example location

- issue 280 - disable assertion rewriting on buggy CPython 2.6.0

- inject "getfixture()" helper to retrieve fixtures from doctests,
  thanks Andreas Zeidler

- issue 259 - when assertion rewriting, be consistent with the default
  source encoding of ASCII on Python 2

- issue 251 - report a skip instead of ignoring classes with init

- issue250 - unicode/str mixes in parametrization names and values now
  work

- issue257 - assertion-triggered compilation of source ending in a
  comment line doesn't blow up in python2.5 (fixed through
  py>=1.4.13.dev6)

- fix --genscript option to generate standalone scripts that also work
  with python3.3 (importer ordering)

- issue171 - in assertion rewriting, show the repr of some global
  variables

- fix option help for "-k"

- move long description of distribution into README.rst

- improve docstring for metafunc.parametrize()

- fix bug where using capsys with pytest.set_trace() in a test function
  would break when looking at capsys.readouterr()

- allow to specify prefixes starting with "_" when customizing
  python_functions test discovery (thanks Graham Horler)

- improve PYTEST_DEBUG tracing output by putting extra data on new lines
  with additional indent

- ensure OutcomeExceptions like skip/fail have initialized exception
  attributes

- issue 260 - don't use nose special setup on plain unittest cases

- fix issue134 - print the collect errors that prevent running specified
  test items

- fix issue266 - accept unicode in MarkEvaluator expressions

From holger at merlinux.eu  Tue Apr 30 23:48:10 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 30 Apr 2013 21:48:10 +0000
Subject: [pytest-dev] devpi-server: lightning fast pypi.python.org proxy
	(0.7 initial release)
Message-ID: <20130430214810.GL5855@merlinux.eu>

devpi-server: lightning-fast pypi.python.org proxy (0.7 initial)
=================================================================

This is the initial release of devpi-server, an easy-to-use caching proxy
server for pypi.python.org, providing fast and reliable installs when
used by pip or easy_install.

devpi-server offers features not found in other PyPI proxy servers:

- transparent caching of pypi.python.org index and release files on first
  access, including indexes and files from 3rd party sites.

- pip/easy_install/buildout are shielded from the typical client-side
  crawling, thus providing lightning-fast and reliable installation (on
  second access of a package).

- ``devpi-server`` moreover automatically updates its main index cache
  using pypi's changelog protocol, making sure you'll always see an
  up-to-date view of what's available.

devpi-server is designed to satisfy all needs arising from
pip/easy_install installation operations and can thus act as the sole
entry point for all package installation interactions. It will manage all
outbound traffic for installing packages.
From holger at merlinux.eu Tue Apr 30 23:48:10 2013
From: holger at merlinux.eu (holger krekel)
Date: Tue, 30 Apr 2013 21:48:10 +0000
Subject: [pytest-dev] devpi-server: lightning fast pypi.python.org proxy (0.7 initial release)
Message-ID: <20130430214810.GL5855@merlinux.eu>

devpi-server: lightning-fast pypi.python.org proxy (0.7 initial)
=================================================================

This is the initial release of devpi-server, an easy-to-use caching proxy
server for pypi.python.org, providing fast and reliable installs when
used by pip or easy_install.

devpi-server offers features not found in other PyPI proxy servers:

- transparent caching of the pypi.python.org index and release files on
  first access, including indexes and files from 3rd-party sites.

- pip/easy_install/buildout are shielded from the typical client-side
  crawling, thus providing lightning-fast and reliable installation (on
  second access of a package).

- ``devpi-server`` moreover automatically updates its main index cache
  using pypi's changelog protocol, making sure you'll always see an
  up-to-date view of what's available.

devpi-server is designed to satisfy all needs arising from
pip/easy_install installation operations and can thus act as the sole
entry point for all package installation interactions, managing all
outbound traffic for installing packages.

Getting started
----------------------------

Simply install ``devpi-server``, for example via::

    pip install devpi-server # or
    easy_install devpi-server

Make sure you have the ``redis-server`` binary available and issue::

    devpi-server

after which an HTTP server is running on ``localhost:3141`` and you can
use the following index url with pip or easy_install::

    pip install -i http://localhost:3141/ext/pypi/simple/ ...
    easy_install -i http://localhost:3141/ext/pypi/simple/ ...

To avoid having to re-type the URL, you can configure pip either by
setting the environment variable ``PIP_INDEX_URL`` to
``http://localhost:3141/ext/pypi/simple/`` or by putting a corresponding
entry into your ``$HOME/.pip/pip.conf`` (posix) or
``%HOME%\pip\pip.ini`` (windows)::

    [global]
    index-url = http://localhost:3141/ext/pypi/simple/

Example timing
----------------

Here is a little screen session using a fresh ``devpi-server`` instance
to install devpi-server itself in a fresh virtualenv::

    hpk at teta:~/p/devpi-server$ virtualenv devpi >/dev/null
    hpk at teta:~/p/devpi-server$ source devpi/bin/activate
    (devpi) hpk at teta:~/p/devpi-server$ time pip install -q \
        -i http://localhost:3141/ext/pypi/simple/ devpi-server

    real    21.971s
    user    1.564s
    system  0.420s

So that took 21 seconds. Now let's remove the virtualenv, recreate it
and install a second time::

    (devpi) hpk at teta:~/p/devpi-server$ rm -rf devpi
    (devpi) hpk at teta:~/p/devpi-server$ virtualenv devpi >/dev/null
    (devpi) hpk at teta:~/p/devpi-server$ time pip install -q \
        -i http://localhost:3141/ext/pypi/simple/ devpi-server

    real    1.716s
    user    1.152s
    system  0.472s

That was more than 10 times faster. The install of ``devpi-server``
(0.7) involves five packages, by the way: ``beautifulsoup4, bottle, py,
redis, requests``.

Compatibility
--------------------

``devpi-server`` works with python2.6 and python2.7 on both Linux and
Windows environments. Windows support is somewhat experimental -- better
not to run a company-wide server with it yet. OSX is untested as of now,
but no issues are expected -- please report back how things work in
practice.

``devpi-server`` requires ``redis-server`` version 2.4 or later. Earlier
versions may or may not work (untested).

Deployment notes
----------------------------

By default, devpi-server configures and starts its own redis instance.
For this it needs to find a ``redis-server`` executable. On windows it
will, in addition to the PATH variable, also check for
``c:\program files\redis\redis-server.exe``, which is the default
install location of the windows redis fork installer.

In a consolidated setting you might want to use the
``--redismode=manual`` and ``--redisport NUM`` options to control the
setup of redis yourself. You might also want to use the ``--datadir``
option to specify where release files will be cached.
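To make that concrete, here is a sketch of such a manual setup (the port
number and cache path are made up for illustration; it assumes you run
and supervise redis yourself)::

    # run your own redis on a port of your choosing
    redis-server --port 6380 &

    # tell devpi-server not to spawn redis, where to reach yours,
    # and where to keep the release-file cache
    devpi-server --redismode=manual --redisport 6380 --datadir /srv/devpi-cache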
Lastly, if you run ``devpi-server`` in a company network, you can for
example proxy-serve the application through an nginx site configuration
like this::

    # sample nginx conf
    server {
        server_name your.server.name;
        listen 80;
        root /home/devpi/.devpi/httpfiles/;   # arbitrary for now
        location / {
            proxy_pass http://localhost:3141;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

Command line options
---------------------

A list of all devpi-server options::

    $ devpi-server -h
    Usage: devpi-server [options]

    Options:
      -h, --help            show this help message and exit

      main options:
        --version           show devpi_version (0.7)
        --datadir=DIR       data directory for devpi-server
                            [~/.devpi/serverdata]
        --port=PORT         port to listen for http requests [3141]
        --redisport=PORT    redis server port number [3142]
        --redismode=auto|manual
                            whether to start redis as a sub process [auto]
        --bottleserver=TYPE
                            bottle server class, you may try eventlet or
                            others [wsgiref]
        --debug             run wsgi application with debug logging

      pypi upstream options:
        --pypiurl=url       base url of remote pypi server
                            [https://pypi.python.org/]
        --refresh=SECS      periodically pull changes from pypi.python.org
                            [60]

Project status and next steps
-----------------------------

``devpi-server`` is considered beta because it is an initial release. It
is tested through tox, with its automated pytest suite passing on
python2.6 and python2.7 on Ubuntu 12.04 and Windows 7.

``devpi-server`` is actively developed and is bound to see more releases
in 2013, in particular for supporting private indexes and a new
development and testing workflow system.

You are very welcome to join, discuss and contribute:

* mailing list: https://groups.google.com/d/forum/devpi-dev
* repository: http://bitbucket.org/hpk42/devpi-server
* issues: http://bitbucket.org/hpk42/devpi-server/issues
* irc: for now on #pylib on irc.freenode.net
* pypi home page: https://pypi.python.org/pypi/devpi-server