From briandorsey at gmail.com  Sun May 3 21:07:11 2009
From: briandorsey at gmail.com (Brian Dorsey)
Date: Sun, 3 May 2009 12:07:11 -0700
Subject: [py-dev] Gitting rid of "(inconsistently failed then succeeded)" ?
In-Reply-To: <200904281303.n3SD3JxT025552@theraft.openend.se>
References: <49F090A2.3010101@freehackers.org>
	<1240569528.14532.16.camel@neil>
	<49F1B470.7020006@freehackers.org>
	<20090428101454.GB11963@trillke.net>
	<200904281054.n3SAsIIs013660@theraft.openend.se>
	<20090428114036.GI11963@trillke.net>
	<200904281303.n3SD3JxT025552@theraft.openend.se>
Message-ID: <66e877b70905031207o3af67c62t7f3cd97f05fb33b0@mail.gmail.com>

On Tue, Apr 28, 2009 at 6:03 AM, Laura Creighton wrote:
> In a message of Tue, 28 Apr 2009 13:40:36 +0200, holger krekel writes:
>> E:    AssertionError: (assertion failed, but when it was re-run for
>> printing intermediate values, it did not fail.  Suggestions:
>> compute assert expression before the assert or use --nomagic)
>
> Looks great to me. :)

Quick thanks to Philippe for bringing this up and to everyone else for
taking the time to carefully work on this error message. It seems like
a small thing, but having a good error message in places like this is
the difference between taking a moment or two to change a test vs.
hitting a wall with py.test and maybe giving up entirely.

Take care,
-Brian


From cfbolz at gmx.de  Mon May 04 23:46:38 2009
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Mon, 04 May 2009 23:46:38 +0200
Subject: [py-dev] Gitting rid of "(inconsistently failed then succeeded)" ?
In-Reply-To: <66e877b70905031207o3af67c62t7f3cd97f05fb33b0@mail.gmail.com>
References: <49F090A2.3010101@freehackers.org>
	<1240569528.14532.16.camel@neil>
	<49F1B470.7020006@freehackers.org>
	<20090428101454.GB11963@trillke.net>
	<200904281054.n3SAsIIs013660@theraft.openend.se>
	<20090428114036.GI11963@trillke.net>
	<200904281303.n3SD3JxT025552@theraft.openend.se>
	<66e877b70905031207o3af67c62t7f3cd97f05fb33b0@mail.gmail.com>
Message-ID: <49FF623E.5070508@gmx.de>

Brian Dorsey wrote:
> On Tue, Apr 28, 2009 at 6:03 AM, Laura Creighton wrote:
>> In a message of Tue, 28 Apr 2009 13:40:36 +0200, holger krekel writes:
>>> E: AssertionError: (assertion failed, but when it was re-run for
>>> printing intermediate values, it did not fail. Suggestions:
>>> compute assert expression before the assert or use --nomagic)
>
>> Looks great to me. :)
>
> Quick thanks to Philippe for bringing this up and to everyone else for
> taking the time to carefully work on this error message. It seems like
> a small thing, but having a good error message in places like this is
> the difference between taking a moment or two to change a test vs.
> hitting a wall with py.test and maybe giving up entirely.

I agree. I actually had to explain this to students during the exercises
for our lectures once or twice, so a clear error message should
definitely improve things.

Cheers,

Carl Friedrich


From holger at merlinux.eu  Wed May 13 20:19:14 2009
From: holger at merlinux.eu (holger krekel)
Date: Wed, 13 May 2009 20:19:14 +0200
Subject: [py-dev] new parametrized tests / deprecating yield
Message-ID: <20090513181914.GA7437@trillke.net>

Hi folks,

for those not following my blog, here is my post
about the newstyle generative tests which are
to substitute and improve on "yield" based tests,
among other parametrization methods:

http://tetamap.wordpress.com/2009/05/13/parametrizing

Hope to get another py.test beta out next week that would
contain this.
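To give a rough idea here as well: the newstyle mechanism is a
per-test-function hook. The snippet below is a minimal sketch rather
than an excerpt from the post; the "numiter" funcarg name and the
values are invented for illustration, while the hook name and the
metafunc.addcall() call follow the py.test 1.0 beta API:

    # conftest.py (or the test module itself)
    def pytest_generate_tests(metafunc):
        # called once for every collected test function; each addcall()
        # schedules one run of that function with the given funcargs
        if "numiter" in metafunc.funcargnames:
            for i in range(10):
                metafunc.addcall(funcargs=dict(numiter=i))

    def test_func(numiter):
        assert numiter < 10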
I am still set to finish refining and
documenting extension hooks ... the above was part of that
effort although i originally planned the "parametrizing" hook
as a post 1.0 feature.

Anyway, let me know what you think - there is still time to do
some adjustments or changes especially to funcargs.

cheers,
holger

--
Metaprogramming, Python, Testing: http://tetamap.wordpress.com
Python, PyPy, pytest contracting: http://merlinux.eu


From sridharr at activestate.com  Fri May 22 00:29:35 2009
From: sridharr at activestate.com (Sridhar Ratnakumar)
Date: Thu, 21 May 2009 15:29:35 -0700
Subject: [py-dev] new parametrized tests / deprecating yield
In-Reply-To: <20090513181914.GA7437@trillke.net>
References: <20090513181914.GA7437@trillke.net>
Message-ID: <4A15D5CF.3010205@activestate.com>

> holger: [...] The new way to parametrize test is meant to substitute
> yield usage of test-functions aka "generative tests", also used by
> nosetests. yield-style Generative tests have received criticism and
> despite being the one who invented them, i mostly agree and recommend
> not using them anymore.

I use `yield' to run 'sub-tests' in sequential order. For example, in
this particular usecase:

http://gist.github.com/115787

You'll notice I have to run test_install, test_list_all, test_import,
test_remove in that *order*.

It is not possible to make them methods of a class and use
`pytest_generate_tests' to run them in sequential order because each of
these sub-tests, besides the order, also depend on their position of
execution (i.e., test_search is to be run after the statement
``c.do_update(None, None, repo_root_url)``)

`yield' achieves this elegantly; I don't know how one would achieve this
requirement otherwise.

BTW, there is no way for me to use funcargs in yield based tests (which
is why I'm calling the setup function, `prepare_client', manually).

Cheers,
Sridhar

On 09-05-13 11:19 AM, holger krekel wrote:
> Hi folks,
>
> for those not following my blog, here is my post
> about the newstyle generative tests which are
> to substitute and improve on "yield" based tests,
> among other parametrization methods:
>
> http://tetamap.wordpress.com/2009/05/13/parametrizing
>
> Hope to get another py.test beta out next week that would
> contain this. I am still set to finish refining and
> documenting extension hooks ... the above was part of that
> effort although i originally planned the "parametrizing" hook
> as a post 1.0 feature.
>
> Anyway, let me know what you think - there is still time to do
> some adjustments or changes especially to funcargs.
>
> cheers,
> holger


From holger at merlinux.eu  Fri May 22 09:41:19 2009
From: holger at merlinux.eu (holger krekel)
Date: Fri, 22 May 2009 09:41:19 +0200
Subject: [py-dev] new parametrized tests / deprecating yield
In-Reply-To: <4A15D5CF.3010205@activestate.com>
References: <20090513181914.GA7437@trillke.net>
	<4A15D5CF.3010205@activestate.com>
Message-ID: <20090522074119.GT7437@trillke.net>

Hi Sridhar,

thanks for sharing the use case!

On Thu, May 21, 2009 at 15:29 -0700, Sridhar Ratnakumar wrote:
> > holger: [...] The new way to parametrize test is meant to substitute
> > yield usage of test-functions aka "generative tests", also used by
> > nosetests. yield-style Generative tests have received criticism and
> > despite being the one who invented them, i mostly agree and recommend
> > not using them anymore.
>
> I use `yield' to run 'sub-tests' in sequential order.
> For example, in this particular usecase:
>
> http://gist.github.com/115787
>
> You'll notice I have to run test_install, test_list_all, test_import,
> test_remove in that *order*.

yes, ok.  are you aware that py.test runs test functions in the order
in which they appear in the test module file, btw?

> It is not possible to make them methods of a class and use
> `pytest_generate_tests' to run them in sequential order because each of
> these sub-tests, besides the order, also depend on their position of
> execution (i.e., test_search is to be run after the statement
> ``c.do_update(None, None, repo_root_url)``)
>
> `yield' achieves this elegantly; I don't know how one would achieve this
> requirement otherwise.

If one of the yielded tests fails, should the rest of the yielded tests
better not run at all?

Would you like to reuse the yielded test functions for other test cases?

> BTW, there is no way for me to use funcargs in yield based tests (which
> is why I'm calling the setup function, `prepare_client', manually).

That's intentional because of the deprecation intent.

However, i'd like to understand your use case better. I get the
impression that something else/more than the current funcarg/generate
mechanisms is needed to address it nicely. So please also state openly
any problems/wishes you have with the current yield-way of doing things.

thanks & cheers,
holger

> Cheers,
> Sridhar
>
> On 09-05-13 11:19 AM, holger krekel wrote:
> > Hi folks,
> >
> > for those not following my blog, here is my post
> > about the newstyle generative tests which are
> > to substitute and improve on "yield" based tests,
> > among other parametrization methods:
> >
> > http://tetamap.wordpress.com/2009/05/13/parametrizing
> >
> > Hope to get another py.test beta out next week that would
> > contain this. I am still set to finish refining and
> > documenting extension hooks ... the above was part of that
> > effort although i originally planned the "parametrizing" hook
> > as a post 1.0 feature.
> >
> > Anyway, let me know what you think - there is still time to do
> > some adjustments or changes especially to funcargs.
> >
> > cheers,
> > holger
>
> _______________________________________________
> py-dev mailing list
> py-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/py-dev

--
Metaprogramming, Python, Testing: http://tetamap.wordpress.com
Python, PyPy, pytest contracting: http://merlinux.eu


From sridharr at activestate.com  Fri May 22 19:43:18 2009
From: sridharr at activestate.com (Sridhar Ratnakumar)
Date: Fri, 22 May 2009 10:43:18 -0700
Subject: [py-dev] new parametrized tests / deprecating yield
In-Reply-To: <20090522074119.GT7437@trillke.net>
References: <20090513181914.GA7437@trillke.net>
	<4A15D5CF.3010205@activestate.com>
	<20090522074119.GT7437@trillke.net>
Message-ID: <4A16E436.1060002@activestate.com>

Hello Holger,

On 09-05-22 12:41 AM, holger krekel wrote:
> are you aware that py.test runs test functions in the order
> in which they appear in the test module file, btw?

Is this by design? Can I always expect the functions and methods to be
run in the defined order?

Vis.:

[quote] 'Tests usually run in the order in which they appear in the
files. However, tests should not rely on running one after another, as
this prevents more advanced usages: running tests distributedly or
selectively, or in "looponfailing" mode, will cause them to run in
random order.' [endquote]

The keyword /usually/ suggests to me that that may not be the case always.
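For reference, the yield-based layout being discussed looks roughly
like the following. This is a hypothetical reduction, not the actual
code from the gist; prepare_client, packages_small_list and the check_*
helpers stand in for the real ones:

    def test_typical_usecase():
        c, repo_root_url = prepare_client(packages_small_list)
        c.do_update(None, None, repo_root_url)
        # each yielded (name, callable, args) tuple is reported as its
        # own sub-test and executed in exactly this order
        yield 'test_search', check_search, c
        yield 'test_list_all', check_list_all, c
        yield 'test_import', check_import, c
        yield 'test_remove', check_remove, c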
> If one of the yielded tests fails, should the rest of the yielded tests
> better not run at all?

Correct. In my case, the rest of the yielded tests should not run.

> Would you like to reuse the yielded test functions for other test cases?

Usually not, but sometimes (if the tests are defined in reusable
fashion), yes.

> i'd like to understand your use case better. I get the
> impression that something else/more than the current funcarg/generate
> mechanisms is needed to address it nicely. So please also state openly
> any problems/wishes you have with the current yield-way of doing things.

I gave some thought to this, and let me explain:

I have this (conceptually big) test case for which I want detailed
reporting. This test case is the function ``test_typical_usecase``. All
the test_* functions defined inside this function are parts of the
parent test.

If one of them fails, then of course the whole test is considered to be
failed and thus the rest of them should not run (this is a bug in my
current test code as it continues to run them).

I guess what I actually want out of this 'splitting' is fine-grained
reporting. That is, if, say, test_import fails, I should see FAIL for
test_import so that I can immediately see where the problem is.

[1] http://gist.github.com/115787
[2] http://gist.github.com/116260

Here, if test_import fails, ``test_typical_usecase['test_import']`` is
shown to be failing in [1] (fine-grained reporting), but this is not a
correct way to do it, as the rest of the tests continue to run.

In [2], test_typical_usecase is shown to be failing (not fine-grained
reporting).

Cheers,
Sridhar


From holger at merlinux.eu  Mon May 25 14:44:34 2009
From: holger at merlinux.eu (holger krekel)
Date: Mon, 25 May 2009 14:44:34 +0200
Subject: [py-dev] new parametrized tests / deprecating yield
In-Reply-To: <4A16E436.1060002@activestate.com>
References: <20090513181914.GA7437@trillke.net>
	<4A15D5CF.3010205@activestate.com>
	<20090522074119.GT7437@trillke.net>
	<4A16E436.1060002@activestate.com>
Message-ID: <20090525124434.GD7437@trillke.net>

Hi Sridhar,

On Fri, May 22, 2009 at 10:43 -0700, Sridhar Ratnakumar wrote:
> Hello Holger,
>
> On 09-05-22 12:41 AM, holger krekel wrote:
>> are you aware that py.test runs test functions in the order
>> in which they appear in the test module file, btw?
>
> Is this by design? Can I always expect the functions and methods to be
> run in the defined order?
>
> Vis.:
> [quote] 'Tests usually run in the order in which they appear in the
> files. However, tests should not rely on running one after another, as
> this prevents more advanced usages: running tests distributedly or
> selectively, or in "looponfailing" mode, will cause them to run in
> random order.' [endquote]
>
> The keyword /usually/ suggests to me that that may not be the case always.

You are right to point this out. It's been in the docs for a while.

I actually think tests will always run in the order in which they
appear in the file *if* those tests are executed in the same process.
However, distribution and looponfailing (and a potential randomizing
plugin) may schedule the execution of each of a group of functions to
different processes. There is no way currently to signal "the functions
of this test class need to run consecutively and together in the same
process".

>> If one of the yielded tests fails, should the rest of the yielded tests
>> better not run at all?
>
> Correct. In my case, the rest of the yielded tests should not run.

guess so.
there is no way to express this currently - it even goes somewhat
against the original idea of yield-mediated tests.

>> Would you like to reuse the yielded test functions for other test cases?
>
> Usually not, but sometimes (if the tests are defined in reusable
> fashion), yes.

ok.

>> i'd like to understand your use case better. I get the
>> impression that something else/more than the current funcarg/generate
>> mechanisms is needed to address it nicely. So please also state openly
>> any problems/wishes you have with the current yield-way of doing things.
>
> I gave some thought to this, and let me explain:
>
> I have this (conceptually big) test case for which I want detailed
> reporting. This test case is the function ``test_typical_usecase``. All
> the test_* functions defined inside this function are parts of the
> parent test.
>
> If one of them fails, then of course the whole test is considered to be
> failed and thus the rest of them should not run (this is a bug in my
> current test code as it continues to run them).
>
> I guess what I actually want out of this 'splitting' is fine-grained
> reporting. That is, if, say, test_import fails, I should see FAIL for
> test_import so that I can immediately see where the problem is.
>
> [1] http://gist.github.com/115787
> [2] http://gist.github.com/116260
>
> Here, if test_import fails, ``test_typical_usecase['test_import']`` is
> shown to be failing in [1] (fine-grained reporting), but this is not a
> correct way to do it, as the rest of the tests continue to run.
>
> In [2], test_typical_usecase is shown to be failing (not fine-grained
> reporting).

i'd like to consider two direct ways of improving reporting.

first possibility:

    def test_typical_usecase(repcontrol):
        packages = packages_small_list
        c, repo_root_url = prepare_client(packages)
        c.do_update(None, None, repo_root_url)
        repcontrol.section("test_search")
        ...
        repcontrol.section("test_list_all")
        ...

You would not need to have things defined in a test function but of
course you could still do it. Running this test would show one or
possibly more dots. If the test fails, the failure report could maybe
look something like this:

    def test_typical_usecase(repcontrol):
        OK    setup
        OK    test_search
        FAIL  test_list_all
        ...

and after FAIL you would see the part of the traceback after the above
'repcontrol.section("test_list_all")' line. Stdout/Stderr capturing
could probably also be made to present only the parts relating to the
failing part.

the second possibility is to write a plugin that implements an
"IncrementalTestCase":

    class IncrementalTestCase:
        def setup(self):
            self.packages = packages_small_list
            self.c, self.repo_root_url = prepare_client(self.packages)
            self.c.do_update(None, None, self.repo_root_url)

        def search(self):
            for pkg in self.packages:
                sample_keyword = pkg['name'][:3]
                logger.info('Searching for `%s` expecting `%s`',
                            sample_keyword, pkg['name'])
                results = [p.name for p in
                           self.c.do_search(None, None, sample_keyword)]
                logger.info('Got results: %s', results)
                ...

this would run one function after another in the order in which they
appear. If a function fails, it would abort the whole case. This scheme
makes it easy to reuse functions for another test case variant. The
class would get discovered by the "IncrementalTest" or maybe just
"IncTest" name, i guess.

Let me know what you think or if you have other ideas.

cheers,
holger

--
Metaprogramming, Python, Testing: http://tetamap.wordpress.com
Python, PyPy, pytest contracting: http://merlinux.eu
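One interim option for the "stop after the first failing step"
requirement, sketched here purely as an illustration rather than
anything proposed in the thread: keep the yield-based sub-tests for
fine-grained reporting, but route every step through a small wrapper
that skips the remaining steps once one of them has failed.
prepare_client, packages_small_list and the check_* helpers are
placeholders for the real code; py.test.skip() is an existing call that
marks a test as skipped:

    import py

    def test_typical_usecase():
        c, repo_root_url = prepare_client(packages_small_list)
        c.do_update(None, None, repo_root_url)
        state = {'failed': False}

        def run_step(func, *args):
            # once any step has failed, report the remaining steps as skipped
            if state['failed']:
                py.test.skip("an earlier step of this usecase failed")
            try:
                func(*args)
            except Exception:
                state['failed'] = True
                raise

        yield 'test_search', run_step, check_search, c
        yield 'test_list_all', run_step, check_list_all, c
        yield 'test_import', run_step, check_import, c
        yield 'test_remove', run_step, check_remove, c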