From peloko45 at gmail.com Mon Feb 1 13:19:13 2010 From: peloko45 at gmail.com (Joan Miller) Date: Mon, 1 Feb 2010 12:19:13 +0000 Subject: [py-dev] Tear down into a class Message-ID: Is it possible to have a teardown function at class level? From peloko45 at gmail.com Mon Feb 1 13:22:28 2010 From: peloko45 at gmail.com (Joan Miller) Date: Mon, 1 Feb 2010 12:22:28 +0000 Subject: [py-dev] setuptools Message-ID: Are there plans for the integration with setuptools/distribute? So I would like py.test to be run using *python setup.py py.test* or something similar, as is done with nose. From holger at merlinux.eu Mon Feb 1 15:30:15 2010 From: holger at merlinux.eu (holger krekel) Date: Mon, 1 Feb 2010 15:30:15 +0100 Subject: [py-dev] setuptools In-Reply-To: References: Message-ID: <20100201143015.GZ6083@trillke.net> On Mon, Feb 01, 2010 at 12:22 +0000, Joan Miller wrote: > Are there plans for the integration with setuptools/distribute? > > So I would like py.test to be run using *python setup.py py.test* or > something similar, as is done with nose. I've seen some discussions, haven't played much with it myself yet. Googled a bit and came up with the below patch to one of my (non-py) packages. With it I could successfully do: python setup.py test # will run "py.test" and it would also take care to temporarily install "py" just for the testing and not as a general dependency. Happy to hear if this works for you and others as well.
cheers, holger diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -3,6 +3,7 @@ if sys.version_info >= (3,0): from distribute_setup import use_setuptools use_setuptools() from setuptools import setup +from setuptools.command.test import test long_description = """ ciss: code-centered single-file "ISSUES.txt" issue tracking license='MIT license', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], author='holger krekel', author_email='holger at merlinux.eu', + cmdclass = {'test': PyTest}, + tests_require = ['py'], entry_points={'console_scripts': [ 'ciss = ciss:main', ]}, @@ -39,6 +42,15 @@ def main(): zip_safe=False, ) +class PyTest(test): + user_options = [] + def initialize_options(self): + test.initialize_options(self) + self.test_suite = "." + def run_tests(self): + import py + py.cmdline.pytest(['.']) + if __name__ == '__main__': main() From peloko45 at gmail.com Mon Feb 1 17:14:12 2010 From: peloko45 at gmail.com (Joan Miller) Date: Mon, 1 Feb 2010 16:14:12 +0000 Subject: [py-dev] setuptools In-Reply-To: <20100201143015.GZ6083@trillke.net> References: <20100201143015.GZ6083@trillke.net> Message-ID: The ideal would be to pass a collector to the option 'test_suite' in setuptools, e.g. for nose [1] the following is used: test_suite = "nose.collector", This approach has also been added to pip [2]. Here is the info about that option [3] [1] http://somethingaboutorange.com/mrl/projects/nose/0.11.1/setuptools_integration.html [2] http://ericholscher.com/blog/2009/nov/5/adding-testing-pip/ [3] http://peak.telecommunity.com/DevCenter/setuptools#test 2010/2/1 holger krekel : > On Mon, Feb 01, 2010 at 12:22 +0000, Joan Miller wrote: >> Are there plans for the integration with setuptools/distribute? >> >> So I would like py.test to be run using *python setup.py py.test* or >> something similar, as is done with nose. > > I've seen some discussions, haven't played much with it myself yet. > Googled a bit and came up with the below patch to one of my > (non-py) packages.
With it I could successfully do: > >    python setup.py test # will run "py.test" > > and it would also take care to temporarily install "py" just for the > testing and not as a general dependency. Happy to hear if this works > for you and others as well. > > cheers, > holger > > [setup.py patch snipped] > From holger at merlinux.eu Mon Feb 1 17:30:21 2010 From: holger at merlinux.eu (holger krekel) Date: Mon, 1 Feb 2010 17:30:21 +0100 Subject: [py-dev] setuptools In-Reply-To: References: <20100201143015.GZ6083@trillke.net> Message-ID: <20100201163021.GC6083@trillke.net> Hi Joan, On Mon, Feb 01, 2010 at 16:14 +0000, Joan Miller wrote: > The ideal would be to pass a collector to the option 'test_suite' in > setuptools, e.g. for nose [1] the following is used: > > test_suite = "nose.collector", > > This approach has also been added to pip [2]. Here is the info about that option [3] This expects a unittest loader which py.test doesn't provide (yet).
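To make the "unittest loader" remark concrete: a `test_suite` entry such as `nose.collector` names a callable that setuptools imports and calls, expecting a `unittest.TestSuite` back. A minimal sketch of that shape, with made-up names (`collector`, `_ExampleTest` are illustrative, not part of py.test or nose):

```python
import unittest

class _ExampleTest(unittest.TestCase):
    # stand-in test case; a real collector would discover project tests
    def test_truth(self):
        self.assertTrue(True)

def collector():
    """Return a unittest.TestSuite - the shape setuptools'
    test_suite option expects (this is what nose.collector does)."""
    loader = unittest.TestLoader()
    return loader.loadTestsFromTestCase(_ExampleTest)
```

Since py.test items are not unittest cases, providing such a loader is the missing piece holger refers to.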
Your [2] also has some critical comments btw - and in general i think the current "test-during-setup/install-time" needs more thought. I'd rather fancy a mechanism to run a script with some developer-defined arguments and to allow specification of "packages needed for testing". On a practical note, did you try out the solution i sketched? It basically implements exactly that in a few lines of code with the current setuptools/distribute infrastructure. If anybody wants to chime in - feel free to :) holger > > [1] http://somethingaboutorange.com/mrl/projects/nose/0.11.1/setuptools_integration.html > > [2] http://ericholscher.com/blog/2009/nov/5/adding-testing-pip/ > > [3] http://peak.telecommunity.com/DevCenter/setuptools#test > > > > 2010/2/1 holger krekel : > > On Mon, Feb 01, 2010 at 12:22 +0000, Joan Miller wrote: > >> Are there plans for the integration with setuptools/distribute? > >> > >> So I would like py.test to be run using *python setup.py py.test* or > >> something similar, as is done with nose. > > > > I've seen some discussions, haven't played much with it myself yet. > > Googled a bit and came up with the below patch to one of my > > (non-py) packages. With it I could successfully do: > > > >    python setup.py test # will run "py.test" > > > > and it would also take care to temporarily install "py" just for the > > testing and not as a general dependency. Happy to hear if this works > > for you and others as well. > > > > cheers, > > holger > > > > [setup.py patch snipped] > > -- From holger at merlinux.eu Tue Feb 2 11:08:06 2010 From: holger at merlinux.eu (holger krekel) Date: Tue, 2 Feb 2010 11:08:06 +0100 Subject: [py-dev] Tear down into a class In-Reply-To: References: Message-ID: <20100202100806.GE6083@trillke.net> On Mon, Feb 01, 2010 at 12:19 +0000, Joan Miller wrote: > Is it possible to have a teardown function at class level? http://codespeak.net/py/dist/test/xunit_setup.html H. From holger at merlinux.eu Sat Feb 6 11:09:06 2010 From: holger at merlinux.eu (holger krekel) Date: Sat, 6 Feb 2010 11:09:06 +0100 Subject: [py-dev] new py.cleanup options Message-ID: <20100206100906.GE6083@trillke.net> Hi all, for those interested (related to the earlier py.cleanup discussion) i just committed some improvements. Options now work like this: py.cleanup # remove "*.pyc" and "*$py.class" (jython) files py.cleanup -e .swp -e .cache # also remove files with these extensions py.cleanup -s # remove "build" and "dist" directories next to setup.py files py.cleanup -d # also remove empty directories py.cleanup -a # synonym for "-s -d -e 'pip-log.txt'" py.cleanup -n # dry run, only show what would be removed If you don't specify path(s), the current working dir will be used.
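The default behaviour described above boils down to walking a tree and unlinking files by extension. A rough dependency-free sketch of that logic (not the actual py.cleanup implementation; `cleanup` is a made-up helper name):

```python
import os

def cleanup(rootdir, extensions=(".pyc",), suffixes=("$py.class",), dryrun=False):
    """Remove byte-compiled leftovers below rootdir.

    With dryrun=True only report what would be removed,
    like the -n option described above."""
    removed = []
    for dirpath, _dirnames, filenames in os.walk(rootdir):
        for name in filenames:
            # str.endswith accepts a tuple of candidate suffixes
            if name.endswith(tuple(extensions) + tuple(suffixes)):
                path = os.path.join(dirpath, name)
                removed.append(path)
                if not dryrun:
                    os.remove(path)
    return removed
```

The -e option would map to extending `extensions`, and -d/-s would need extra passes for empty directories and build/dist trees.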
Note that i don't remove "*egg-info" directories - it would break "python setup.py develop" and i'd like to keep the py.cleanup command safe in this regard. If you like to play, use easy_install or pip-install on: http://codespeak.net/~hpk/py-1.2.0post1.tar.gz cheers, holger From holger at merlinux.eu Sun Feb 7 17:55:04 2010 From: holger at merlinux.eu (holger krekel) Date: Sun, 7 Feb 2010 17:55:04 +0100 Subject: [py-dev] basic py.test tutorial - draft Message-ID: <20100207165504.GK6083@trillke.net> Hi all, i've just completed my draft for the Pycon tutorial "rapid multi-purpose testing with py.test". see here for PDF and example directories: hg clone http://bitbucket.org/hpk42/pytest-tutorial1/ or just grab the PDF here: http://bitbucket.org/hpk42/pytest-tutorial1/raw/5a8deeba9184/pytest-basic.pdf If anybody feels like walking through it and sending me feedback, that'd be much appreciated. It covers - basic usage - marking and skipping - funcargs - the "mysetup" pattern in successive examples. cheers, holger From holger at merlinux.eu Mon Feb 8 17:37:31 2010 From: holger at merlinux.eu (holger krekel) Date: Mon, 8 Feb 2010 17:37:31 +0100 Subject: [py-dev] py.test 1.2.1 released Message-ID: <20100208163731.GM6083@trillke.net> Hi all, i just released py-1.2.1 - thanks for all your feedback and suggestions here and on IRC. The announcement and changes: http://codespeak.net/py/dist/announce/release-1.2.1.html main fixes and improvements: * --funcargs [testpath] will show available builtin- and project funcargs. * display a short and concise traceback if funcarg lookup fails. * early-load "conftest.py" files in non-dot first-level sub directories. * --tb=line will print a single line for each failing test (issue67) * py.cleanup has a number of new options, cleans setup.py related files * always call python-level teardown functions even if setup failed (issue78). a detailed CHANGELOG is inlined below. 
cheers, holger P.S.: After Pycon US i am going to be offline until the beginning of April. Looking forward to it, btw! :) Changes between 1.2.1 and 1.2.0 ===================================== - refined usage and options for "py.cleanup":: py.cleanup # remove "*.pyc" and "*$py.class" (jython) files py.cleanup -e .swp -e .cache # also remove files with these extensions py.cleanup -s # remove "build" and "dist" directories next to setup.py files py.cleanup -d # also remove empty directories py.cleanup -a # synonym for "-s -d -e 'pip-log.txt'" py.cleanup -n # dry run, only show what would be removed - add a new option "py.test --funcargs" which shows available funcargs and their help strings (docstrings on their respective factory function) for a given test path - display a short and concise traceback if a funcarg lookup fails - early-load "conftest.py" files in non-dot first-level sub directories. this allows one to conveniently keep and access test-related options in a ``test`` subdir and still add command line options. - fix issue67: new super-short traceback-printing option: "--tb=line" will print a single line for each failing (python) test indicating its filename, lineno and the failure value - fix issue78: always call python-level teardown functions even if the corresponding setup failed. This includes refinements for calling setup_module/class functions which will now only be called once instead of the previous behaviour where they'd be called multiple times if they raise an exception (including a Skipped exception). Any exception will be recorded and associated with all tests in the corresponding module/class scope.
- fix issue63: assume <40 columns to be a bogus terminal width, default to 80 - fix pdb debugging to be in the correct frame on raises-related errors - update apipkg.py to fix an issue where recursive imports might unnecessarily break importing - fix plugin links From holger at merlinux.eu Fri Feb 12 13:57:16 2010 From: holger at merlinux.eu (holger krekel) Date: Fri, 12 Feb 2010 13:57:16 +0100 Subject: [py-dev] testrun.org, hudson, py.test Message-ID: <20100212125716.GX6083@trillke.net> hi all, FYI i sent a mail regarding a new site http://testrun.org to the testing-in-python list, see below. Of course the contained offer applies to you here as well - may be useful for people using py.test to look into how it's configured for Hudson, a popular CI server written in Java. cheers, holger ----- Forwarded message from holger krekel ----- Date: Wed, 10 Feb 2010 15:02:38 +0100 From: holger krekel To: Testing in Python User-Agent: Mutt/1.5.17+20080114 (2008-01-14) Subject: [TIP] starting testrun.org for testing in python Hi all, This year i plan to work a lot on testing and Python - and one of my plans is to build up some collective computing resources for testing. Not tied to any particular testing tool or practice but rather to Continuous Integration methods. Here is the practical start of this effort:: http://testrun.org It doesn't have a fancy website such as Snakebite - in fact i really try to outcompete Titus on KISS principles here and just start operating it :) Note that I am using Hudson to get to know the current state of the art - i realize it's not a perfect fit for Python. It does have tons of plugins though and is a standard in the Java world (guess what, i actually just did some open-source strategy consulting in a Java shop and thus dived in :). We can add more jobs and views to it and if anyone here has experience and wants to help - i grant admin rights easily. Particularly since i am away after Pycon for 6 weeks.
It'd also be great if some Snakebite resources or other resources could be added - just contact me or grab me at Pycon. It should not take more than 5 minutes to set up a new node. Given interest i'll set up a mailing list where we discuss matters further - otherwise i presume it's ok to communicate a bit about this here. Any feedback and ideas welcome. And again, this is *not* py.test specific although i certainly want to make py.test integrate nicely. cheers, holger _______________________________________________ testing-in-python mailing list testing-in-python at lists.idyll.org http://lists.idyll.org/listinfo/testing-in-python ----- End forwarded message ----- -- From pontus.astrom at businessecurity.com Fri Feb 12 14:32:42 2010 From: pontus.astrom at businessecurity.com (=?ISO-8859-1?Q?Pontus_=C5str=F6m?=) Date: Fri, 12 Feb 2010 14:32:42 +0100 Subject: [py-dev] Acceptance testing with py.test Message-ID: <4B75587A.2060208@businessecurity.com> Hi, I'm considering a plugin for doing acceptance testing with py.test. The core idea is to mimic the behaviour of Concordion (http://concordion.org) with respect to visual effects and reports, but use a python engine. The python libs I am considering are py.test for running tests and genshi for xml instrumentation. Then, to get a polished plugin some code needs to be written; a py.test plugin for test collection and reporting, and a utility library for easing genshi xml instrumentation. Well, the reason for this explanation is to see if anybody out there has some input regarding the idea, or maybe knows of some useful libs or approaches to achieve the aim, which btw. is to obtain efficient, readable, executable acceptance tests connected to requirements. Read more about this aim on the Concordion website.
Cheers, Pontus From holger at merlinux.eu Fri Feb 12 22:23:47 2010 From: holger at merlinux.eu (holger krekel) Date: Fri, 12 Feb 2010 22:23:47 +0100 Subject: [py-dev] Acceptance testing with py.test In-Reply-To: <4B75587A.2060208@businessecurity.com> References: <4B75587A.2060208@businessecurity.com> Message-ID: <20100212212347.GD6083@trillke.net> Hi Pontus, On Fri, Feb 12, 2010 at 14:32 +0100, Pontus Åström wrote: > I'm considering a plugin for doing acceptance testing with py.test. The > core idea is to mimic the behaviour of Concordion > (http://concordion.org) with respect to visual effects and reports, but > use a python engine. The python libs I am considering are py.test for > running tests and genshi for xml instrumentation. I think that's a good plan :) > Then, to get a polished plugin some code needs to be written; a py.test > plugin for test collection and reporting, and a utility library for > easing genshi xml instrumentation. > > Well, the reason for this explanation is to see if anybody out there has > some input regarding the idea, or maybe knows of some useful libs or > approaches to achieve the aim, which btw. is to obtain efficient, > readable, executable acceptance tests connected to requirements. Read > more about this aim on the Concordion website. Myself, i haven't done too much acceptance testing with non-python domains. I hear that Ruby is strong with domain-specific testing but haven't checked on things myself yet. As far as giving you some ideas on how the py.test plugin side might look like i hope that Ronny Pfannschmidt or Ali Afshar can provide a pointer to their code for driving tests from yaml definitions as a starting point.
cheers, holger From Ronny.Pfannschmidt at gmx.de Sat Feb 13 13:17:40 2010 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Sat, 13 Feb 2010 13:17:40 +0100 Subject: [py-dev] pytest codecheck plugin, request for comments/review Message-ID: <1266063460.9546.119.camel@localhost> Hi, i just finished the initial version of my pytest-codecheck plugin it's available at http://bitbucket.org/RonnyPfannschmidt/pytest-pycheckers/ currently it is not configurable, it just runs pep8 based on the moinmoin settings and pyflakes it lacks caching of runs; it should auto-skip in case of codecheck-mtime > code-mtime regards Ronny From holger at merlinux.eu Sat Feb 13 13:27:13 2010 From: holger at merlinux.eu (holger krekel) Date: Sat, 13 Feb 2010 13:27:13 +0100 Subject: [py-dev] pytest codecheck plugin, request for comments/review In-Reply-To: <1266063460.9546.119.camel@localhost> References: <1266063460.9546.119.camel@localhost> Message-ID: <20100213122713.GH6083@trillke.net> On Sat, Feb 13, 2010 at 13:17 +0100, Ronny Pfannschmidt wrote: > Hi, > > i just finished the initial version of my pytest-codecheck plugin cool! > it's available at > http://bitbucket.org/RonnyPfannschmidt/pytest-pycheckers/ > > currently it is not configurable, it just runs pep8 based on the > moinmoin settings and pyflakes > > it lacks caching of runs; it should auto-skip in case of > codecheck-mtime > code-mtime Or maybe add a command line option that drives/configures it? In any case, i think it'd be very worthwhile to "setup.py sdist register upload" already.
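The auto-skip condition Ronny describes (codecheck-mtime > code-mtime) is just an mtime comparison; a sketch of such a check, with a made-up helper name, could look like:

```python
import os

def should_skip_check(source_path, last_clean_check_time):
    """Return True if the source file has not been modified since the
    last clean check, i.e. re-running pep8/pyflakes would be redundant.

    `should_skip_check` is a hypothetical helper, not part of the
    pytest-codecheckers plugin."""
    try:
        return os.path.getmtime(source_path) <= last_clean_check_time
    except OSError:
        # file vanished since the last run: nothing left to check
        return True
```

A plugin would persist `last_clean_check_time` per file between sessions (the metadata-store idea discussed later in this thread).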
holger From holger at merlinux.eu Sat Feb 13 15:08:00 2010 From: holger at merlinux.eu (holger krekel) Date: Sat, 13 Feb 2010 15:08:00 +0100 Subject: [py-dev] pytest codecheck plugin, request for comments/review In-Reply-To: <20100213122713.GH6083@trillke.net> References: <1266063460.9546.119.camel@localhost> <20100213122713.GH6083@trillke.net> Message-ID: <20100213140800.GI6083@trillke.net> On Sat, Feb 13, 2010 at 13:27 +0100, holger krekel wrote: > On Sat, Feb 13, 2010 at 13:17 +0100, Ronny Pfannschmidt wrote: > In any case, i think it'd be very worthwhile to > "setup.py sdist register upload" already. Ronny did that :) it's named pytest-codecheckers now and checks pep8 and pyflakes compliance for any python files you point it at. If you have issues just send them here, i guess. holger From Ronny.Pfannschmidt at gmx.de Sun Feb 14 00:57:05 2010 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Sun, 14 Feb 2010 00:57:05 +0100 Subject: [py-dev] Acceptance testing with py.test In-Reply-To: <20100212212347.GD6083@trillke.net> References: <4B75587A.2060208@businessecurity.com> <20100212212347.GD6083@trillke.net> Message-ID: <1266105425.9546.145.camel@localhost> On Fri, 2010-02-12 at 22:23 +0100, holger krekel wrote: > Hi Pontus, > > On Fri, Feb 12, 2010 at 14:32 +0100, Pontus Åström wrote: > > I'm considering a plugin for doing acceptance testing with py.test. The > > core idea is to mimic the behaviour of Concordion > > (http://concordion.org) with respect to visual effects and reports, but > > use a python engine. The python libs I am considering are py.test for > > running tests and genshi for xml instrumentation. > > I think that's a good plan :) > > > Then, to get a polished plugin some code needs to be written; a py.test > > plugin for test collection and reporting, and a utility library for > > easing genshi xml instrumentation.
> > > > Well, the reason for this explanation is to see if anybody out there has > > some input regarding the idea, or maybe knows of some useful libs or > > approaches to achieve the aim, which btw. is to obtain efficient, > > readable, executable acceptance tests connected to requirements. Read > > more about this aim on the Concordion website. I have some basic ideas about structuring that kind of test. A) stepped reporting, so each test reports the current step B) collection of dependent test items, having each test item as `step` A requires extending the py.test reporting (but might be easy) B requires extending the py.test test execution Ali currently does acceptance testing for webapps using approach B. The test collector is at http://bitbucket.org/aafshar/glashammer-testing/src/tip/glashammer/utils/testing.py however it currently lacks dependent-test support. We would also like to build acceptance testing tools for pygtk, but those seem a better fit for approach A. We would need those for more complex applications, like http://pida.co.uk Another interesting acceptance test approach is the way mercurial is tested. It's using sectioned shell scripts that can be matched with sectioned expected output files. I will do some more work in those areas after 17.03 -- Ronny > Myself, i haven't done too much acceptance testing with non-python domains. > I hear that Ruby is strong with domain-specific testing but haven't > checked on things myself yet. As far as giving you some ideas > on how the py.test plugin side might look like i hope that Ronny > Pfannschmidt or Ali Afshar can provide a pointer to their code > for driving tests from yaml definitions as a starting point.
> > cheers, > holger From Ronny.Pfannschmidt at gmx.de Sun Feb 14 09:06:27 2010 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Sun, 14 Feb 2010 09:06:27 +0100 Subject: [py-dev] RFC pytest metadata store plugin Message-ID: <1266134787.28016.11.camel@localhost> hi, i just got a basic idea for a pytest caching plugin it would basically be a key-value store for test items/fspath items it should provide 2 objects as funcargs/py.test namespace items path_meta store key-values for the current fspath item_meta store key-values for the current test item the need i have for such a feature is based on the speed issue of the codecheckers plugin, in particular pep8 is slow, and should be skipped in case the file's mtime is less than the time of the last pep8 check that had no issues an issue i haven't clearly thought about yet is how and where to store the data. options i currently can think of are: - xattr on the actual files/alternative streams - a sqlite db -- Ronny From Ronny.Pfannschmidt at gmx.de Sun Feb 14 09:20:06 2010 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Sun, 14 Feb 2010 09:20:06 +0100 Subject: [py-dev] how to use StdCapture().call properly (i.e. not breaking py.test -s) Message-ID: <1266135606.28016.13.camel@localhost> hi, currently the codecheckers plugin breaks py.test -s, because the nested capture stops working once the parent capture is gone. -- Ronny From schettino72 at gmail.com Sun Feb 14 10:41:04 2010 From: schettino72 at gmail.com (Eduardo Schettino) Date: Sun, 14 Feb 2010 17:41:04 +0800 Subject: [py-dev] RFC pytest metadata store plugin In-Reply-To: <1266134787.28016.11.camel@localhost> References: <1266134787.28016.11.camel@localhost> Message-ID: On Sun, Feb 14, 2010 at 4:06 PM, Ronny Pfannschmidt wrote: > hi, > > i just got a basic idea for a pytest caching plugin > > > it would basically be a key-value store for test items/fspath items > > it should provide 2 objects as funcargs/py.test namespace items > > path_meta >    store key-values for the current fspath > item_meta >    store key-values for the current test item > > the need i have for such a feature is based on the speed issue of the > codecheckers plugin, in particular pep8 is slow, and should be skipped > in case the file's mtime is less than the time of the last pep8 check > that had no issues I solved this problem using a different tool. You might want to take a look at "doit". It is more like a build-tool than a test runner. http://python-doit.sourceforge.net/ Regards, Eduardo From prologic at shortcircuit.net.au Sun Feb 14 15:48:41 2010 From: prologic at shortcircuit.net.au (James Mills) Date: Mon, 15 Feb 2010 00:48:41 +1000 Subject: [py-dev] Testing daemonize() (of sorts) Message-ID: Hi Folks, Not sure how big this community here is, but at the suggestion of Ronny, here's a problem I'd like to "throw" at you: Consider the following test for my library circuits (1): http://bitbucket.org/prologic/circuits-dev/src/tip/tests/app/test_daemon.py Notice how I have to externally call "python app.py " in order to make the test work? I haven't found a way to integrate the app within the test function/module at all without errors flying wildly. The problem with the above method (although it works and passes) is that no coverage data is picked up by the fact that "app.py" was executed and therefore parts (or all) of the Daemon Component (2) were actually covered. Anyone have any suggestions or workarounds here? Thanks, cheers James 1. http://code.google.com/p/circuits/ 2.
http://bitbucket.org/prologic/circuits-dev/src/1e4ebb87eafc/circuits/app/__init__.py#cl-20 -- -- "Problems are solved by method" From pontus.astrom at businessecurity.com Mon Feb 15 08:30:04 2010 From: pontus.astrom at businessecurity.com (=?UTF-8?B?UG9udHVzIMOFc3Ryw7Zt?=) Date: Mon, 15 Feb 2010 08:30:04 +0100 Subject: [py-dev] Acceptance testing with py.test In-Reply-To: <1266105425.9546.145.camel@localhost> References: <4B75587A.2060208@businessecurity.com> <20100212212347.GD6083@trillke.net> <1266105425.9546.145.camel@localhost> Message-ID: <4B78F7FC.3060600@businessecurity.com> Ronny Pfannschmidt wrote: > On Fri, 2010-02-12 at 22:23 +0100, holger krekel wrote: > > I have some basic ideas about structuring that kind of test. > > A) stepped reporting, so each test reports the current step > B) collection of dependent test items, having each test item as `step` > > A requires extending the py.test reporting (but might be easy) > B requires extending the py.test test execution Could you just elaborate a bit on the above items and give the rationale for each approach. I currently have some difficulty understanding what you mean. From holger at merlinux.eu Mon Feb 15 10:54:15 2010 From: holger at merlinux.eu (holger krekel) Date: Mon, 15 Feb 2010 10:54:15 +0100 Subject: [py-dev] RFC pytest metadata store plugin In-Reply-To: <1266134787.28016.11.camel@localhost> References: <1266134787.28016.11.camel@localhost> Message-ID: <20100215095415.GQ6083@trillke.net> On Sun, Feb 14, 2010 at 09:06 +0100, Ronny Pfannschmidt wrote: > it would basically be a key-value store for test items/fspath items > > it should provide 2 objects as funcargs/py.test namespace items > > path_meta > store key-values for the current fspath > item_meta > store key-values for the current test item don't think funcargs are of primary interest for this but rather other plugins (like codecheckers) who want to use it to store information.
It would also help to implement persistence across test sessions and features like "run last failing tests" without having to use "--looponfailing" from the 'python-xdist' plugin. > the need i have for such a feature is based on the speed issue of the > codecheckers plugin, in particular pep8 is slow, and should be skipped > in case the file's mtime is less than the time of the last pep8 check > that had no issues > > an issue i haven't clearly thought about yet is how and where to > store the data > > options i currently can think of are: > > - xattr on the actual files/alternative streams > - a sqlite db IMHO neither - rather in a directory in the plain filesystem as this should work on any python and operating system. The main issues i see are - how to determine the caching-data directory - how to provide a minimal interface to access and manipulate it. cheers holger From holger at merlinux.eu Mon Feb 15 11:11:57 2010 From: holger at merlinux.eu (holger krekel) Date: Mon, 15 Feb 2010 11:11:57 +0100 Subject: [py-dev] Acceptance testing with py.test In-Reply-To: <4B78F7FC.3060600@businessecurity.com> References: <4B75587A.2060208@businessecurity.com> <20100212212347.GD6083@trillke.net> <1266105425.9546.145.camel@localhost> <4B78F7FC.3060600@businessecurity.com> Message-ID: <20100215101156.GS6083@trillke.net> On Mon, Feb 15, 2010 at 08:30 +0100, Pontus Åström wrote: > Ronny Pfannschmidt wrote: > > On Fri, 2010-02-12 at 22:23 +0100, holger krekel wrote: > > > > I have some basic ideas about structuring that kind of test. > > > > A) stepped reporting, so each test reports the current step > > B) collection of dependent test items, having each test item as `step` > > > > A requires extending the py.test reporting (but might be easy) > > B requires extending the py.test test execution > > > Could you just elaborate a bit on the above items and give the > rationale for each approach. I currently have some difficulty > understanding what you mean.
Regarding approach B i recommend checking out the fine docs that Ronny pointed to, in this file: http://bitbucket.org/aafshar/glashammer-testing/src/tip/glashammer/utils/testing.py Regarding approach A: currently test items are collected and executed and reported. They are the basic unit of testing and they are meant to be independent and isolated from each other although they may share and often do share fixture code. Domain-specific acceptance tests may run longer and they may involve "logical" steps that need to happen in a certain order. Mapping those steps to test items conflicts with the isolation above. One idea is to simply make reporting more flexible and signal "step" results to the terminal (or other) reporters. And otherwise keep the isolation. This is approach A. B works now, and i'd help to make A work if there are concrete use cases and no better solution. HTH, holger From holger at merlinux.eu Mon Feb 15 11:20:19 2010 From: holger at merlinux.eu (holger krekel) Date: Mon, 15 Feb 2010 11:20:19 +0100 Subject: [py-dev] Testing daemonize() (of sorts) In-Reply-To: References: Message-ID: <20100215102019.GT6083@trillke.net> Hi James, On Mon, Feb 15, 2010 at 00:48 +1000, James Mills wrote: > Hi Folks, > > Not sure how big this community here is, but at There are around 130 people subscribed here but i don't know how deep into things everyone is :) > the suggestion of Ronny, here's a problem I'd like > to "throw" at you: > > Consider the following test for my library circuits (1): > > http://bitbucket.org/prologic/circuits-dev/src/tip/tests/app/test_daemon.py > > Notice how I have to externally call "python app.py " in order to > make the test work? I haven't found a way to integrate the app within the > test function/module at all without errors flying wildly. sidenote: i think you could make good use of the "tmpdir" funcarg here to avoid writing ".pid" files to random places. See "py.test --funcargs" for info.
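The tmpdir suggestion amounts to giving every test its own fresh directory for the ".pid" file instead of a hard-coded location. A plain-stdlib sketch of the pattern (with py.test you would request the `tmpdir` funcarg instead of creating the directory yourself; `write_pidfile` is a made-up stand-in for what the daemonizing code would do):

```python
import os

def write_pidfile(directory):
    """Write the current process id into <directory>/app.pid and
    return the path - standing in for the daemon's pidfile step."""
    path = os.path.join(directory, "app.pid")
    with open(path, "w") as f:
        f.write(str(os.getpid()))
    return path

def test_pidfile(tmpdir):
    # with py.test, `tmpdir` would be the funcarg-provided fresh directory
    path = write_pidfile(str(tmpdir))
    with open(path) as f:
        assert int(f.read()) == os.getpid()
```

Each test run then gets an isolated pidfile path, so parallel or repeated runs cannot collide.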
> The problem with the above method (although it works and passes) is
> that no coverage data is picked up, despite the fact that "app.py" was
> executed and therefore parts (or all) of the Daemon Component (2) were
> actually covered.
>
> Anyone have any suggestions or workarounds here?

IMO this needs looking into:

a) how to merge coverage data and report it from multiple sources (probably pytest-coverage, which you are working on, could be tweaked)

b) how to write a nice funcarg that helps running a python script with coverage configured accordingly. That funcarg should maybe live with that plugin.

HTH,
holger

From Ronny.Pfannschmidt at gmx.de Mon Feb 15 11:38:50 2010
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Mon, 15 Feb 2010 11:38:50 +0100
Subject: [py-dev] the seeming need of task-based build-tool properties in py.test
Message-ID: <1266230330.18644.4.camel@localhost>

hi,

as needs like dependent tests, checking the need for test execution and the changes in distribution emerge, it seems like we are going into the direction of a task-based build tool, where each test item is a 'task' that may depend on other tasks and has possibly checked preconditions for being executed.

i just wanted to throw that at the ml so people can discuss.

-- Ronny

From holger at merlinux.eu Mon Feb 15 12:01:28 2010
From: holger at merlinux.eu (holger krekel)
Date: Mon, 15 Feb 2010 12:01:28 +0100
Subject: [py-dev] the seeming need of task-based build-tool properties in py.test
In-Reply-To: <1266230330.18644.4.camel@localhost>
References: <1266230330.18644.4.camel@localhost>
Message-ID: <20100215110128.GV6083@trillke.net>

Hey Ronny,

On Mon, Feb 15, 2010 at 11:38 +0100, Ronny Pfannschmidt wrote:
> as needs like dependent tests, checking the need for test execution and
> the changes in distribution emerge,

for everybody's information, this is partly a reference to IRC discussions.
> it seems like we are going into the direction of a task-based build
> tool where each test item is a 'task' that may depend on other tasks
> and has possibly checked preconditions for being executed.

not sure i agree. I'd like to keep the base functionality and the py.test tool simple, and rather work on higher-level goals in separate projects, also including support for nose and unittest there. There are various build approaches ('doit' is an interesting one recently mentioned here) and i'd like to leverage those.

That being said, I see overlap between testing and deployment. Those who were at Pycon 2009 and EuroPython know that i regard it as natural for (functional) testing and deployment of software to converge in the future. After all, deployment is a kind of test, usually involving humans interacting with the software. And the work done to prepare functional tests involves automatically installing software. It makes sense to me to re-use the same techniques (e.g. virtualenv, distutils installs, build tools etc.) for preparing automated test runs and for deploying code.

> i just wanted to throw that at the ml so people can discuss.

heh, please try not to post too many mails about details or ideas here, though. After all, it's the execution of ideas which counts rather than the dropping of them :)

cheers & all the best,
holger

From Adam.Schmalhofer at gmx.de Mon Feb 15 14:50:31 2010
From: Adam.Schmalhofer at gmx.de (Adam)
Date: Mon, 15 Feb 2010 14:50:31 +0100
Subject: [py-dev] the seeming need of task-based build-tool properties in py.test
In-Reply-To: <1266230330.18644.4.camel@localhost>
References: <1266230330.18644.4.camel@localhost>
Message-ID: <20100215145031.3adddf26@deepthought>

Ronny Pfannschmidt wrote:

> as needs like dependent tests, [...]
> it seems like we are going into the direction of a task-based build
> tool where each test item is a 'task' that may depend on other tasks
> and has possibly checked preconditions for being executed.
A test dependency is IMO very different from a build dependency. With a build dependency, the task can't run until its dependencies have produced their products; if they are run in the wrong order, the result is a broken build.

OTOH, test dependencies are only relevant for interpreting why a test failed. If (at least) one dependency failed, I don't care whether the test passed or not, and the order in which the two were run is irrelevant. Sure, if I run the dependency first, I can save time, but as (with the exception of looponfailing) '#failed tests << #tests', it shouldn't make much of a difference.

--Adam

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 828 bytes
Desc: not available
URL:

From Ronny.Pfannschmidt at gmx.de Wed Feb 17 15:41:43 2010
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Wed, 17 Feb 2010 15:41:43 +0100
Subject: [py-dev] the seeming need of task-based build-tool properties in py.test
In-Reply-To: <20100215145031.3adddf26@deepthought>
References: <1266230330.18644.4.camel@localhost>
	<20100215145031.3adddf26@deepthought>
Message-ID: <1266417703.16084.20.camel@localhost>

On Mon, 2010-02-15 at 14:50 +0100, Adam wrote:
> Ronny Pfannschmidt wrote:
>
> > as needs like dependent tests, [...]
> > it seems like we are going into the direction of a task-based build
> > tool where each test item is a 'task' that may depend on other tasks
> > and has possibly checked preconditions for being executed.
>
> A test dependency is IMO very different from a build dependency. With a
> build dependency the task can't run until its dependencies have
> produced their products. If they are run in the wrong order, it
> produces a broken build.
ordering issues indeed get important as soon as one creates test items that depend on the order and execution of other test items

this is already happening for glashammer's acceptance test suite

another part of the issue is tests that take long but are not required to run every time - like, for example, code validators

-- Ronny

> OTOH, test dependencies are only relevant for interpreting why a test
> failed. If (at least) one dependency failed, I don't care whether the
> test passed or not. In which order the two were run is irrelevant.
> Sure, if I ran the dependency first, I can save time, but as (with the
> exception of looponfailing) '#failed tests << #tests' it shouldn't
> make much of a difference.
>
> --Adam

From Adam.Schmalhofer at gmx.de Wed Feb 17 17:13:01 2010
From: Adam.Schmalhofer at gmx.de (Adam)
Date: Wed, 17 Feb 2010 17:13:01 +0100
Subject: [py-dev] the seeming need of task-based build-tool properties in py.test
In-Reply-To: <1266417703.16084.20.camel@localhost>
References: <1266230330.18644.4.camel@localhost>
	<20100215145031.3adddf26@deepthought>
	<1266417703.16084.20.camel@localhost>
Message-ID: <20100217171301.3f9bd618@deepthought>

Ronny Pfannschmidt wrote:

> ordering issues indeed get important as soon as one creates test items
> that depend on the order and execution of other test items
>
> this is already happening for glashammer's acceptance test suite

Personally, I consider this a broken test setup that really should be separated into a cached_setup and the tests. I wouldn't want to leave global state around. However, I don't know how hard this is for glashammer's specific cases. And I guess it is a sort of boilerplate.

> another part of the issue is tests that take long but are not required
> to run every time - like, for example, code validators

I consider this a separate issue (and it would be cool to have).

--Adam
From Ronny.Pfannschmidt at gmx.de Wed Feb 17 17:43:05 2010
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Wed, 17 Feb 2010 17:43:05 +0100
Subject: [py-dev] the seeming need of task-based build-tool properties in py.test
In-Reply-To: <20100217171301.3f9bd618@deepthought>
References: <1266230330.18644.4.camel@localhost>
	<20100215145031.3adddf26@deepthought>
	<1266417703.16084.20.camel@localhost>
Message-ID: <1266424985.16084.27.camel@localhost>

On Wed, 2010-02-17 at 17:13 +0100, Adam wrote:
> Ronny Pfannschmidt wrote:
>
> > ordering issues indeed get important as soon as one creates test items
> > that depend on the order and execution of other test items
> >
> > this is already happening for glashammer's acceptance test suite

its more or less breaking a big fat acceptance test down into reportable steps

the current solution isn't nice, but works in the current py.test; it will have to be adapted once we have a full decision on how to handle steps of acceptance tests (the ideas are still fuzzy)

> Personally, I consider this a broken test setup that really should be
> separated into a cached_setup and the tests. I wouldn't want to leave
> global state around. However, I don't know how hard this is for
> glashammer's specific cases. And I guess it is a sort of boilerplate.
one of the inherent properties of those acceptance tests is that a step in the middle may fail without necessarily breaking the rest; however, the order is important

the same would go for doctest chunks

order dependence, failure independence

it's generally a tricky area to deal with, and i expect that we will do it wrong a few times before something that's really acceptable emerges as the result of feedback

-- Ronny

> > another part of the issue is tests that take long but are not
> > required to run every time - like, for example, code validators
>
> I consider this a separate issue (and it would be cool to have).
>
> --Adam
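The "order dependence, failure independence" property Ronny describes can be illustrated with a small sketch: steps run in a fixed order, a failing step is recorded, and later steps still execute. The step names and the runner are invented for illustration; this is not how py.test executed items at the time.

```python
# Illustration only: run acceptance-test steps in a fixed order and
# keep going after a failure, recording each step's outcome.
def step_login():
    pass

def step_post():
    assert False, "simulated failure in the middle"

def step_logout():
    pass

def run_steps(steps):
    results = []
    for func in steps:              # order is preserved
        try:
            func()
            results.append((func.__name__, "passed"))
        except AssertionError:
            results.append((func.__name__, "failed"))
    return results

results = run_steps([step_login, step_post, step_logout])
# the middle step fails, yet step_logout still runs
```

The design question left open in the thread is exactly where such a loop should live: inside one test item that reports "step" results (approach A), or as separate, order-dependent test items (approach B).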