From nicoddemus at gmail.com Sun Mar 4 07:41:10 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Sun, 04 Mar 2018 12:41:10 +0000 Subject: [pytest-dev] Stickers Message-ID: Hi everyone, Does anyone still have the image we used to make the stickers we got last sprint? I'm planning to make some more to distribute in the next Python event in my city. :) Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at the-compiler.org Sun Mar 4 07:43:27 2018 From: me at the-compiler.org (Florian Bruhin) Date: Sun, 4 Mar 2018 13:43:27 +0100 Subject: [pytest-dev] Stickers In-Reply-To: References: Message-ID: <20180304124326.uf3skmet3lzlppa2@hooch.localdomain> Hey Bruno, On Sun, Mar 04, 2018 at 12:41:10PM +0000, Bruno Oliveira wrote: > Does anyone still have the image we used to make the stickers we got last > sprint? I'm planning to make some more to distribute in the next Python > event in my city. :) Some stuff is here: https://github.com/pytest-dev/pytest-design If the one you search isn't, it really should be :) Florian -- https://www.qutebrowser.org | me at the-compiler.org (Mail/XMPP) GPG: 916E B0C8 FD55 A072 | https://the-compiler.org/pubkey.asc I love long mails! | https://email.is-not-s.ms/ From nicoddemus at gmail.com Sun Mar 4 08:05:29 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Sun, 04 Mar 2018 13:05:29 +0000 Subject: [pytest-dev] Stickers In-Reply-To: <20180304124326.uf3skmet3lzlppa2@hooch.localdomain> References: <20180304124326.uf3skmet3lzlppa2@hooch.localdomain> Message-ID: On Sun, Mar 4, 2018 at 9:43 AM Florian Bruhin wrote: > Hey Bruno, > > On Sun, Mar 04, 2018 at 12:41:10PM +0000, Bruno Oliveira wrote: > > Does anyone still have the image we used to make the stickers we got last > > sprint? I'm planning to make some more to distribute in the next Python > > event in my city. :) > > Some stuff is here: https://github.com/pytest-dev/pytest-design > If the one you search isn't, it really should be :) > R?! Awesome, that's exactly what I was looking for. Thanks Florian! Cheers, Bruno -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Mon Mar 5 17:44:18 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Mon, 05 Mar 2018 22:44:18 +0000 Subject: [pytest-dev] pytest 3.4.2 released Message-ID: Hi everyone, pytest 3.4.2 has just been released to PyPI! This is a bug-fix release, being a drop-in replacement. To upgrade:: pip install --upgrade pytest The full changelog is available at http://doc.pytest.org/en/latest/changelog.html. Thanks to all who contributed to this release, among them: * Allan Feldman * Bruno Oliveira * Florian Bruhin * Jason R. Coombs * Kyle Altendorf * Maik Figura * Ronny Pfannschmidt * codetriage-readme-bot * feuillemorte * joshm91 * mike Happy testing, The pytest Development Team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From opensource at ronnypfannschmidt.de Fri Mar 9 05:24:35 2018
From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt)
Date: Fri, 9 Mar 2018 11:24:35 +0100
Subject: [pytest-dev] preparing a breaking internal change - splitting Session into the node and the plugin
Message-ID: <4b029c5a-4e58-4771-2763-bd466f14c02f@ronnypfannschmidt.de>

Hi everyone,

for a while now, the fact that Session is both a Node and a plugin has
created some interesting issues (like the compat properties that can't
warn correctly due to pluggy and fixture scanning).

I'd like to alleviate those issues by splitting the plugin part and the
session part into different classes.

While affecting other users should be next to impossible, it's still a
breaking change. As such I believe it should be prepared as part of a
major pytest release that also removes more badness/slack.

(This also creates some unique issues, since the very structure I want
to break apart prevents sane deprecation warnings for some parts.)

I'd like to get some opinions on what others think of this.

-- Ronny

From florian.schulze at gmx.net Fri Mar 9 08:43:03 2018
From: florian.schulze at gmx.net (Florian Schulze)
Date: Fri, 09 Mar 2018 14:43:03 +0100
Subject: [pytest-dev] preparing a breaking internal change - splitting Session into the node and the plugin
In-Reply-To: <4b029c5a-4e58-4771-2763-bd466f14c02f@ronnypfannschmidt.de>
References: <4b029c5a-4e58-4771-2763-bd466f14c02f@ronnypfannschmidt.de>
Message-ID: <9726F379-58B6-422B-BAFF-76F7205F3842@gmx.net>

On 9 Mar 2018, at 11:24, RonnyPfannschmidt wrote:

> Hi everyone,
>
> for a while now, the fact that Session is both a Node and a plugin has
> created some interesting issues (like the compat properties that can't
> warn correctly due to pluggy and fixture scanning).
>
> I'd like to alleviate those issues by splitting the plugin part and the
> session part into different classes.
>
> While affecting other users should be next to impossible, it's still a
> breaking change. As such I believe it should be prepared as part of a
> major pytest release that also removes more badness/slack.
>
> (This also creates some unique issues, since the very structure I want
> to break apart prevents sane deprecation warnings for some parts.)
>
> I'd like to get some opinions on what others think of this.

I wouldn't take a major release to cram in as many changes as possible.
IMHO it's fine to have a major release for just one breaking change.
That way it's easier to manage possible fallout and build confidence
that major releases aren't *that* bad. The possibility of proper
deprecations trumps the wish to clean up as much as possible in one go.
Frequent small steps are better than big steps every once in a while.
The goal will be the same, but with less disruption.
Regards,
Florian Schulze

From opensource at ronnypfannschmidt.de Fri Mar 9 08:46:17 2018
From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt)
Date: Fri, 9 Mar 2018 14:46:17 +0100
Subject: [pytest-dev] preparing a breaking internal change - splitting Session into the node and the plugin
In-Reply-To: <9726F379-58B6-422B-BAFF-76F7205F3842@gmx.net>
References: <4b029c5a-4e58-4771-2763-bd466f14c02f@ronnypfannschmidt.de> <9726F379-58B6-422B-BAFF-76F7205F3842@gmx.net>
Message-ID: 

Am 09.03.2018 um 14:43 schrieb Florian Schulze:
> On 9 Mar 2018, at 11:24, RonnyPfannschmidt wrote:
>
>> Hi everyone,
>>
>> for a while now, the fact that Session is both a Node and a plugin has
>> created some interesting issues (like the compat properties that can't
>> warn correctly due to pluggy and fixture scanning).
>>
>> I'd like to alleviate those issues by splitting the plugin part and the
>> session part into different classes.
>>
>> While affecting other users should be next to impossible, it's still a
>> breaking change. As such I believe it should be prepared as part of a
>> major pytest release that also removes more badness/slack.
>>
>> (This also creates some unique issues, since the very structure I want
>> to break apart prevents sane deprecation warnings for some parts.)
>>
>> I'd like to get some opinions on what others think of this.
>
> I wouldn't take a major release to cram in as many changes as possible.
> IMHO it's fine to have a major release for just one breaking change.
> That way it's easier to manage possible fallout and build confidence
> that major releases aren't *that* bad. The possibility of proper
> deprecations trumps the wish to clean up as much as possible in one go.
> Frequent small steps are better than big steps every once in a while.
> The goal will be the same, but with less disruption.

Really good point. This reminds me of the way setuptools handles things:
each major release only handles one singular point, which in turn ensures
smooth transitions.

Thanks for pulling the plug on the desire to just cram many things in.

We should still make some kind of plan, since we have all those
RemovedInPytest40 warnings around.

-- Ronny

>
> Regards,
> Florian Schulze

From ringo.de.smet at ontoforce.com Mon Mar 12 12:08:11 2018
From: ringo.de.smet at ontoforce.com (Ringo De Smet)
Date: Mon, 12 Mar 2018 17:08:11 +0100
Subject: [pytest-dev] Not running standard pytest collector for file spec/*_spec.py
Message-ID: 

Hello,

I am in the process of implementing a pytest plugin to run mamba tests.
Running pytest without any arguments works correctly: pytest picks up
tests using the python and unittest plugins from the tests folder, and
picks up the mamba tests from the spec folder.
The problem starts when running pytest with a single spec file as argument: $ pytest spec/action_base_spec.py ======================================================================= test session starts ======================================================================== platform darwin -- Python 3.6.4, pytest-3.4.1, py-1.5.2, pluggy-0.6.0 rootdir: /Users/ringods/Projects/ontoforce/metis/execution_layer, inifile: plugins: mamba-1.0.0 collected 6 items / 1 errors ============================================================================== ERRORS ============================================================================== ____________________________________________________________ ERROR collecting spec/action_base_spec.py _____________________________________________________________ spec/action_base_spec.py:20: in with description('ActionBase') as self: E AttributeError: __enter__ !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ===================================================================== 1 error in 0.15 seconds ====================================================================== This comes from the python plugin in pytest. When running with `-p no:python`, this command succeeds. Why is the python plugin picking up this file, even when it doesn't match the regexes `test_*.py` or `*_test.py`? Ringo -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensource at ronnypfannschmidt.de Mon Mar 12 12:44:01 2018 From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt) Date: Mon, 12 Mar 2018 17:44:01 +0100 Subject: [pytest-dev] Not running standard pytest collector for file spec/*_spec.py In-Reply-To: References: Message-ID: <216972f1-e519-6106-89f6-e7e0ae5fe975@ronnypfannschmidt.de> Hi Ringo, if pytest is given a explicit filename, it just goes for the file, even if it doesn't match the glob for python files when searching automatically -- Ronny Am 12.03.2018 um 17:08 schrieb Ringo De Smet: > Hello, > > I am in the process of implementing a pytest plugin to run mamba tests > as a pytest plugin. Running pytest without any arguments works > correctly: pytest picks up tests using the python and unittest plugins > from the tests folder and picks up the mamba tests from the spec folder. > > The problem starts when running pytest with a single spec file as > argument: > > $ pytest spec/action_base_spec.py > ======================================================================= > test session starts > ======================================================================== > platform darwin -- Python 3.6.4, pytest-3.4.1, py-1.5.2, pluggy-0.6.0 > rootdir: /Users/ringods/Projects/ontoforce/metis/execution_layer, inifile: > plugins: mamba-1.0.0 > collected 6 items / 1 errors > > ============================================================================== > ERRORS > ============================================================================== > ____________________________________________________________ ERROR > collecting spec/action_base_spec.py > _____________________________________________________________ > spec/action_base_spec.py:20: in > ? ? with description('ActionBase') as self: > E ? AttributeError: __enter__ > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > Interrupted: 1 errors during collection > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
> ===================================================================== > 1 error in 0.15 seconds > ====================================================================== > > This comes from the python plugin in pytest. When running with `-p > no:python`, this command succeeds. > > Why is the python plugin picking up this file, even when it doesn't > match the regexes `test_*.py` or `*_test.py`? > > Ringo > > > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Mon Mar 12 15:10:12 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Mon, 12 Mar 2018 19:10:12 +0000 Subject: [pytest-dev] Not running standard pytest collector for file spec/*_spec.py In-Reply-To: <216972f1-e519-6106-89f6-e7e0ae5fe975@ronnypfannschmidt.de> References: <216972f1-e519-6106-89f6-e7e0ae5fe975@ronnypfannschmidt.de> Message-ID: Hi Ringo, It is as Ronny said, you can see the code responsible for that here: https://github.com/pytest-dev/pytest/blob/master/_pytest/python.py#L162 When the file has a `.py` extension and is one of the "inipaths" (paths given explicitly in the command line), then the `python` plugin will collect that file anyway. You can override this by implementing your own `pytest_collect_file` and return non-`None` when a `.py` file inside the specs directory is passed in the command-line. Cheers, Bruno. On Mon, Mar 12, 2018 at 1:44 PM RonnyPfannschmidt < opensource at ronnypfannschmidt.de> wrote: > Hi Ringo, > > if pytest is given a explicit filename, it just goes for the file, > > even if it doesn't match the glob for python files when searching > automatically > -- Ronny > > > Am 12.03.2018 um 17:08 schrieb Ringo De Smet: > > Hello, > > I am in the process of implementing a pytest plugin to run mamba tests as > a pytest plugin. Running pytest without any arguments works correctly: > pytest picks up tests using the python and unittest plugins from the tests > folder and picks up the mamba tests from the spec folder. > > The problem starts when running pytest with a single spec file as argument: > > $ pytest spec/action_base_spec.py > ======================================================================= > test session starts > ======================================================================== > platform darwin -- Python 3.6.4, pytest-3.4.1, py-1.5.2, pluggy-0.6.0 > rootdir: /Users/ringods/Projects/ontoforce/metis/execution_layer, inifile: > plugins: mamba-1.0.0 > collected 6 items / 1 errors > > ============================================================================== > ERRORS > ============================================================================== > ____________________________________________________________ ERROR > collecting spec/action_base_spec.py > _____________________________________________________________ > spec/action_base_spec.py:20: in > with description('ActionBase') as self: > E AttributeError: __enter__ > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: > 1 errors during collection > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > ===================================================================== 1 > error in 0.15 seconds > ====================================================================== > > This comes from the python plugin in pytest. When running with `-p > no:python`, this command succeeds. 
> > Why is the python plugin picking up this file, even when it doesn't match > the regexes `test_*.py` or `*_test.py`? > > Ringo > > > > _______________________________________________ > pytest-dev mailing listpytest-dev at python.orghttps://mail.python.org/mailman/listinfo/pytest-dev > > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrickdepinguin at gmail.com Mon Mar 12 16:47:45 2018 From: patrickdepinguin at gmail.com (Thomas De Schampheleire) Date: Mon, 12 Mar 2018 21:47:45 +0100 Subject: [pytest-dev] Parametrized autouse fixtures Message-ID: Hello, The Kallithea project is a repository hosting and review system, currently supporting git and hg. We currently have some test cases that need to be run for these two version control systems. Previously this was done with some Python magic, which was now simplified and made more explicit in commit: https://kallithea-scm.org/repos/kallithea/changeset/45a281a0f36ff59ffaa4aa0107fabfc1a6310251 but I assume it can be made more automatic with pytest fixtures. I was thinking to use an autouse, parametrized fixture to set the backend to 'hg' and 'git' respectively. But I can't make it work. Here is the change I did on top of the mentioned commit: diff --git a/kallithea/tests/vcs/base.py b/kallithea/tests/vcs/base.py --- a/kallithea/tests/vcs/base.py +++ b/kallithea/tests/vcs/base.py @@ -5,6 +5,7 @@ InMemoryChangeset class is working prope import os import time import datetime +import pytest from kallithea.lib import vcs from kallithea.lib.vcs.nodes import FileNode @@ -27,6 +28,11 @@ class _BackendTestMixin(object): """ recreate_repo_per_test = True + @pytest.fixture(autouse=True, + params=['hg', 'git']) + def set_backend_alias(cls, request): + cls.backend_alias = request.param + @classmethod def get_backend(cls): return vcs.get_backend(cls.backend_alias) diff --git a/kallithea/tests/vcs/test_archives.py b/kallithea/tests/vcs/test_archives.py --- a/kallithea/tests/vcs/test_archives.py +++ b/kallithea/tests/vcs/test_archives.py @@ -14,7 +14,7 @@ from kallithea.tests.vcs.base import _Ba from kallithea.tests.vcs.conf import TESTS_TMP_PATH -class ArchivesTestCaseMixin(_BackendTestMixin): +class TestArchivesTestCaseMixin(_BackendTestMixin): @classmethod def _get_commits(cls): @@ -95,11 +95,3 @@ class ArchivesTestCaseMixin(_BackendTest def test_archive_prefix_with_leading_slash(self): with pytest.raises(VCSError): self.tip.fill_archive(prefix='/any') - - -class TestGitArchive(ArchivesTestCaseMixin): - backend_alias = 'git' - - -class TestHgArchive(ArchivesTestCaseMixin): - backend_alias = 'hg' but when running this I get: $ pytest kallithea/tests/vcs/test_archives.py Test session starts (platform: linux2, Python 2.7.14, pytest 3.4.2, pytest-sugar 0.9.1) benchmark: 3.1.1 (defaults: timer=time.time disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) rootdir: /home/tdescham/repo/contrib/kallithea/kallithea-review, inifile: pytest.ini plugins: sugar-0.9.1, localserver-0.4.1, benchmark-3.1.1 ???????????????????????????????????????????????????????????????????????? ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[hg] ????????????????????????????????????????????????????????????????????????? 
kallithea/tests/vcs/base.py:70: in setup_class
    Backend = cls.get_backend()
kallithea/tests/vcs/base.py:38: in get_backend
    return vcs.get_backend(cls.backend_alias)
E   AttributeError: type object 'TestArchivesTestCaseMixin' has no attribute 'backend_alias'

 7% ?

????????????????????????????????????????????????????????????????????????
ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[git]
????????????????????????????????????????????????????????????????????????
kallithea/tests/vcs/base.py:70: in setup_class
    Backend = cls.get_backend()
kallithea/tests/vcs/base.py:38: in get_backend
    return vcs.get_backend(cls.backend_alias)
E   AttributeError: type object 'TestArchivesTestCaseMixin' has no attribute 'backend_alias'

 14% ??

[..]

So the parametrization seems to work because each test is run twice, but
I can't seem to find the right way to set the backend_alias such that a
later call from a classmethod works correctly.

I found some possibly useful magic by nicoddemus but could not make that
work either:
https://github.com/pytest-dev/pytest/issues/2618#issuecomment-318519875

Is it possible to achieve what I want, without changing each test method
to take a fixture explicitly?

Thanks,
Thomas

From nicoddemus at gmail.com Mon Mar 12 17:16:38 2018
From: nicoddemus at gmail.com (Bruno Oliveira)
Date: Mon, 12 Mar 2018 21:16:38 +0000
Subject: [pytest-dev] Parametrized autouse fixtures
In-Reply-To: 
References: 
Message-ID: 

Hi Thomas,

It seems the problem is that you are mixing xunit-style fixtures (by
implementing a setup_class classmethod) and pytest-style fixtures: they
currently don't play well together, because setup_* methods execute
before all other fixtures (see #517). There are plans to fix that, but
this is scheduled for 3.6 only.

But there's an easy workaround: just change your 'setup_class' method
into a proper fixture instead:

    class TestArchivesTestCaseMixin:

        @classmethod
        @pytest.fixture(autouse=True, params=['hg', 'git'], scope='class')
        def _configure_backend(cls, request):
            backend_alias = request.param
            Backend = vcs.get_backend(backend_alias)
            ...

Hope that helps.

Cheers,
Bruno.

On Mon, Mar 12, 2018 at 5:48 PM Thomas De Schampheleire <
patrickdepinguin at gmail.com> wrote:

> Hello,
>
> The Kallithea project is a repository hosting and review system,
> currently supporting git and hg. We currently have some test cases
> that need to be run for these two version control systems.
>
> Previously this was done with some Python magic, which was now
> simplified and made more explicit in commit:
>
> https://kallithea-scm.org/repos/kallithea/changeset/45a281a0f36ff59ffaa4aa0107fabfc1a6310251
>
> but I assume it can be made more automatic with pytest fixtures. I was
> thinking to use an autouse, parametrized fixture to set the backend to
> 'hg' and 'git' respectively. But I can't make it work.
> > Here is the change I did on top of the mentioned commit: > > diff --git a/kallithea/tests/vcs/base.py b/kallithea/tests/vcs/base.py > --- a/kallithea/tests/vcs/base.py > +++ b/kallithea/tests/vcs/base.py > @@ -5,6 +5,7 @@ InMemoryChangeset class is working prope > import os > import time > import datetime > +import pytest > > from kallithea.lib import vcs > from kallithea.lib.vcs.nodes import FileNode > @@ -27,6 +28,11 @@ class _BackendTestMixin(object): > """ > recreate_repo_per_test = True > > + @pytest.fixture(autouse=True, > + params=['hg', 'git']) > + def set_backend_alias(cls, request): > + cls.backend_alias = request.param > + > @classmethod > def get_backend(cls): > return vcs.get_backend(cls.backend_alias) > diff --git a/kallithea/tests/vcs/test_archives.py > b/kallithea/tests/vcs/test_archives.py > --- a/kallithea/tests/vcs/test_archives.py > +++ b/kallithea/tests/vcs/test_archives.py > @@ -14,7 +14,7 @@ from kallithea.tests.vcs.base import _Ba > from kallithea.tests.vcs.conf import TESTS_TMP_PATH > > > -class ArchivesTestCaseMixin(_BackendTestMixin): > +class TestArchivesTestCaseMixin(_BackendTestMixin): > > @classmethod > def _get_commits(cls): > @@ -95,11 +95,3 @@ class ArchivesTestCaseMixin(_BackendTest > def test_archive_prefix_with_leading_slash(self): > with pytest.raises(VCSError): > self.tip.fill_archive(prefix='/any') > - > - > -class TestGitArchive(ArchivesTestCaseMixin): > - backend_alias = 'git' > - > - > -class TestHgArchive(ArchivesTestCaseMixin): > - backend_alias = 'hg' > > > > but when running this I get: > > $ pytest kallithea/tests/vcs/test_archives.py > Test session starts (platform: linux2, Python 2.7.14, pytest 3.4.2, > pytest-sugar 0.9.1) > benchmark: 3.1.1 (defaults: timer=time.time disable_gc=False > min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 > warmup=False warmup_iterations=100000) > rootdir: /home/tdescham/repo/contrib/kallithea/kallithea-review, > inifile: pytest.ini > plugins: sugar-0.9.1, localserver-0.4.1, benchmark-3.1.1 > > > ???????????????????????????????????????????????????????????????????????? > ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[hg] > ????????????????????????????????????????????????????????????????????????? > kallithea/tests/vcs/base.py:70: in setup_class > Backend = cls.get_backend() > kallithea/tests/vcs/base.py:38: in get_backend > return vcs.get_backend(cls.backend_alias) > E AttributeError: type object 'TestArchivesTestCaseMixin' has no > attribute 'backend_alias' > > > 7% ? > > ???????????????????????????????????????????????????????????????????????? > ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[git] > ???????????????????????????????????????????????????????????????????????? > kallithea/tests/vcs/base.py:70: in setup_class > Backend = cls.get_backend() > kallithea/tests/vcs/base.py:38: in get_backend > return vcs.get_backend(cls.backend_alias) > E AttributeError: type object 'TestArchivesTestCaseMixin' has no > attribute 'backend_alias' > > > 14% ?? > > [..] > > > > So the parametrization seems to work because each test is run twice, > but I can't seem to find the right way to set the backend_alias, such > that a later call from a classmethod works correctly. > > I found some possibly useful magic by nicoddemus but could not make > that work either: > https://github.com/pytest-dev/pytest/issues/2618#issuecomment-318519875 > > Is it possible to achieve what I want, without changing each test > method to take a fixture explicitly? 
> > Thanks, > Thomas > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ringo.de.smet at ontoforce.com Tue Mar 13 03:39:54 2018 From: ringo.de.smet at ontoforce.com (Ringo De Smet) Date: Tue, 13 Mar 2018 08:39:54 +0100 Subject: [pytest-dev] Not running standard pytest collector for file spec/*_spec.py In-Reply-To: References: <216972f1-e519-6106-89f6-e7e0ae5fe975@ronnypfannschmidt.de> Message-ID: Ronny, Bruno, On Mon, Mar 12, 2018 at 8:10 PM, Bruno Oliveira wrote: > Hi Ringo, > > It is as Ronny said, you can see the code responsible for that here: > > https://github.com/pytest-dev/pytest/blob/master/_pytest/python.py#L162 > > When the file has a `.py` extension and is one of the "inipaths" (paths > given explicitly in the command line), then the `python` plugin will > collect that file anyway. > That's a pitty. Given the pluggability of pytest, each plugin could have a way to collect files and offer test suites back to pytest. Isn't there a way to specify one of my spec tests without the wrong plugin(s) picking up this file? > > You can override this by implementing your own `pytest_collect_file` and > return non-`None` when a `.py` file inside the specs directory is passed in > the command-line. > > Bruno, my plugin is collecting the file specified on the command line correctly, but still the python plugin tries to run it too. That's where it goes wrong. Ringo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ringo.de.smet at ontoforce.com Tue Mar 13 05:46:44 2018 From: ringo.de.smet at ontoforce.com (Ringo De Smet) Date: Tue, 13 Mar 2018 10:46:44 +0100 Subject: [pytest-dev] No traceback filtering with __tracebackhide__ attribute using custom pytest plugin Message-ID: Hello, Here I am again with a problem regarding my custom pytest plugin to run mamba tests. In the mamba tests, I am using the the expects library. I patched the library locally and added the attribute __tracebackhide__ = True to the methods marked here: https://github.com/jaimegildesagredo/expects/blob/12bc9501b75b89a9d8b9916ee6da5ca318e72145/expects/expectations.py#L10-L19 When using the patched expects library in a regular unittest, the stacktrace is filtered as documented. tests/test_action_base.py:72: in test_getoption_ok expect(rs).to(equal('value2')) E AssertionError: E expected: 'value' to equal 'value2' But when using the library in my mamba tests, I get a full stack trace. 
./../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/_pytest/runner.py:192: in __init__ self.result = func() ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/_pytest/runner.py:178: in return CallInfo(lambda: ihook(item=item, **kwds), when=when) ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/pluggy/__init__.py:617: in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/pluggy/__init__.py:222: in _hookexec return self._inner_hookexec(hook, methods, kwargs) ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/pluggy/__init__.py:216: in firstresult=hook.spec_opts.get('firstresult'), ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/pluggy/callers.py:201: in _multicall return outcome.get_result() ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/pluggy/callers.py:76: in get_result raise ex[1].with_traceback(ex[2]) ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/pluggy/callers.py:180: in _multicall res = hook_impl.function(*args) ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/_pytest/runner.py:109: in pytest_runtest_call item.runtest() python-mamba/pytest_mamba/plugin.py:134: in runtest raise mamba_error.exception ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/mamba/example.py:43: in _execute_test self.test(execution_context) spec/action_base_spec.py:44: in 00000007__it is ok-- expect(rs).to(equal('value2')) ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/expects/expectations.py:22: in to self._assert(matcher) ../../../../.pyenv/versions/3.6.4/envs/metis/lib/python3.6/site-packages/expects/expectations.py:29: in _assert raise AssertionError(self._failure_message(matcher, reasons)) E AssertionError: E expected: 'value' to equal 'value2' Am I right in saying that the pytest_runtest_makereport hook from runner.py is the one calling into the filtering of the traceback and using the __tracebackhide__ attribute? Why isn't this done for my custom test suite? Ringo -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Tue Mar 13 06:32:34 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Tue, 13 Mar 2018 10:32:34 +0000 Subject: [pytest-dev] Not running standard pytest collector for file spec/*_spec.py In-Reply-To: References: <216972f1-e519-6106-89f6-e7e0ae5fe975@ronnypfannschmidt.de> Message-ID: Hi Ringo, On Tue, Mar 13, 2018 at 4:40 AM Ringo De Smet wrote: > Ronny, Bruno, > > On Mon, Mar 12, 2018 at 8:10 PM, Bruno Oliveira > wrote: > >> Hi Ringo, >> >> It is as Ronny said, you can see the code responsible for that here: >> >> https://github.com/pytest-dev/pytest/blob/master/_pytest/python.py#L162 >> >> When the file has a `.py` extension and is one of the "inipaths" (paths >> given explicitly in the command line), then the `python` plugin will >> collect that file anyway. >> > > That's a pitty. Given the pluggability of pytest, each plugin could have a > way to collect files and offer test suites back to pytest. Isn't there a > way to specify one of my spec tests without the wrong plugin(s) picking up > this file? > The only way I can think of right now is to pass `-p no:python` in the command line to explicitly disable the Python plugin, as you mention. 
You can override this by implementing your own `pytest_collect_file` and >> return non-`None` when a `.py` file inside the specs directory is passed in >> the command-line. >> >> > Bruno, my plugin is collecting the file specified on the command line > correctly, but still the python plugin tries to run it too. That's where it > goes wrong. > Oh my bad, indeed all return values of `pytest_collect_file` are processed. This is by design, that allows `--doctest-modules` to process docstrings in test files (for example). But looking at your original error more closely: ``` spec/action_base_spec.py:20: in with description('ActionBase') as self: E AttributeError: __enter__ ``` It is not clear to me why this is breaking because of the python plugin; can you share `action_base_spec.py` and your plugin code? Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ringo.de.smet at ontoforce.com Tue Mar 13 06:49:39 2018 From: ringo.de.smet at ontoforce.com (Ringo De Smet) Date: Tue, 13 Mar 2018 11:49:39 +0100 Subject: [pytest-dev] Not running standard pytest collector for file spec/*_spec.py In-Reply-To: References: <216972f1-e519-6106-89f6-e7e0ae5fe975@ronnypfannschmidt.de> Message-ID: Bruno, On Tue, Mar 13, 2018 at 11:32 AM, Bruno Oliveira wrote: > > But looking at your original error more closely: > > ``` > spec/action_base_spec.py:20: in > with description('ActionBase') as self: > E AttributeError: __enter__ > ``` > > It is not clear to me why this is breaking because of the python plugin; > can you share `action_base_spec.py` and your plugin code? > > Not for the moment. I have a request pending with the management team if it is OK for them to release this plugin as OpenSource. I understand why it fails though. When you try to run this as a regular pytest/unittest, the `with` block is being interpreted. A with block requires an __enter__ and __exit__ definition. Disabling the python plugin prevents this interpretation. Ringo -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Tue Mar 13 06:56:58 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Tue, 13 Mar 2018 10:56:58 +0000 Subject: [pytest-dev] Not running standard pytest collector for file spec/*_spec.py In-Reply-To: References: <216972f1-e519-6106-89f6-e7e0ae5fe975@ronnypfannschmidt.de> Message-ID: Ringo, On Tue, Mar 13, 2018 at 7:49 AM Ringo De Smet wrote: > Bruno, > > On Tue, Mar 13, 2018 at 11:32 AM, Bruno Oliveira > wrote: > >> >> But looking at your original error more closely: >> >> ``` >> spec/action_base_spec.py:20: in >> with description('ActionBase') as self: >> E AttributeError: __enter__ >> ``` >> >> It is not clear to me why this is breaking because of the python plugin; >> can you share `action_base_spec.py` and your plugin code? >> >> > Not for the moment. I have a request pending with the management team if > it is OK for them to release this plugin as OpenSource. > > I understand why it fails though. When you try to run this as a regular > pytest/unittest, the `with` block is being interpreted. A with block > requires an __enter__ and __exit__ definition. Disabling the python plugin > prevents this interpretation. > Hmm but that's how Python (the language, not the plugin) interprets `with` blocks, it is not clear why the internal plugin would have anything to do with that. Well, it is hard to tell without looking at the code. Sorry I be of more help. Cheers, Bruno. 
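For readers following this thread: the `pytest_collect_file` hook Bruno
mentions would look roughly like the sketch below. This is only a minimal
illustration based on pytest 3.x's documented collection API; the
`SpecFile`/`SpecItem` classes and the `*_spec.py` pattern are placeholders,
not Ringo's actual plugin code.

```python
# conftest.py -- minimal sketch of a custom collector for *_spec.py files
import pytest


def pytest_collect_file(path, parent):
    # Claim files named *_spec.py for our own collector; returning None
    # lets other plugins (including the built-in python plugin) decide.
    if path.ext == ".py" and path.basename.endswith("_spec.py"):
        return SpecFile(path, parent)


class SpecFile(pytest.File):
    def collect(self):
        # A real plugin would parse the spec file here and yield one item
        # per example; a single placeholder item keeps the sketch runnable.
        yield SpecItem("placeholder_example", self)


class SpecItem(pytest.Item):
    def runtest(self):
        # Execute the collected example; raising an exception marks it failed.
        pass
```

As discussed above, this does not stop the python plugin from also
collecting a `.py` file that is passed explicitly on the command line, so
`-p no:python` remains the workaround for that case.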
-------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Tue Mar 13 17:24:19 2018 From: flub at devork.be (Floris Bruynooghe) Date: Tue, 13 Mar 2018 22:24:19 +0100 Subject: [pytest-dev] Adding intro talk/slides to pytest-dev repos and licensing Message-ID: Hi all, Ronny was interested in updating/adapting a pytest-intro talk I've given at a meetup sometime before [0]. He suggested to add it in a repo under the pytest-dev organisation, which seems like a fine idea. Why not all re-use slides if they're useful. However this also raises the question of what license to use. Pytest currently use MIT for everything but I'm not sure how suitable this is for non-code. After looking at Creative Commons, which seems like the default license for this sort of things, we came to think the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) [1] would be a reasonable choice for this. Personally I wouldn't mind something closer to MIT but it seems Creative Commons doesn't provide something like this. So my questions to you all are: 1. Do you think having a talk/slides in a repo under the pytest-dev organisation is a good idea? 2. Do you think this is a suitable license for a talk? Do you have any better suggestion? 3. Are we fine with putting CC BY-SA 4.0 licensed content in the pytest-dev organisation's repos? Cheers, Floris [0] http://devork.be/talks/pytest-intro/talk.html [1] https://creativecommons.org/licenses/by-sa/4.0/ From patrickdepinguin at gmail.com Tue Mar 13 18:05:02 2018 From: patrickdepinguin at gmail.com (Thomas De Schampheleire) Date: Tue, 13 Mar 2018 23:05:02 +0100 Subject: [pytest-dev] Parametrized autouse fixtures In-Reply-To: References: Message-ID: Thanks a lot, this did help me very well! /Thomas 2018-03-12 22:16 GMT+01:00 Bruno Oliveira : > Hi Thomas, > > It seems the problem is that you are mixing xunit-style fixtures (by > implementing a setup_class classmethod) and pytest-style fixtures: they > currently don?t play well together, setup_* methods execute before all other > fixtures (see #517). There are plans to fix that, but this scheduled for 3.6 > only. > > But there?s an easy workaround, just change your ?setup_class? method into a > proper fixture instead: > > class TestArchivesTestCaseMixin: > > @classmethod > @pytest.fixture(autouse=True, params=['hg', 'git'], scope='class') > def _configure_backend(cls, request): > backend_alias = request.param > Backend = vcs.get_backend(backend_alias) > ... > > Hope that helps. > > Cheers, > Bruno. > > > On Mon, Mar 12, 2018 at 5:48 PM Thomas De Schampheleire > wrote: >> >> Hello, >> >> The Kallithea project is a repository hosting and review system, >> currently supporting git and hg. We currently have some test cases >> that need to be run for these two version control systems. >> >> Previously this was done with some Python magic, which was now >> simplified and made more explicit in commit: >> >> https://kallithea-scm.org/repos/kallithea/changeset/45a281a0f36ff59ffaa4aa0107fabfc1a6310251 >> >> but I assume it can be made more automatic with pytest fixtures. I was >> thinking to use an autouse, parametrized fixture to set the backend to >> 'hg' and 'git' respectively. But I can't make it work. 
>> >> Here is the change I did on top of the mentioned commit: >> >> diff --git a/kallithea/tests/vcs/base.py b/kallithea/tests/vcs/base.py >> --- a/kallithea/tests/vcs/base.py >> +++ b/kallithea/tests/vcs/base.py >> @@ -5,6 +5,7 @@ InMemoryChangeset class is working prope >> import os >> import time >> import datetime >> +import pytest >> >> from kallithea.lib import vcs >> from kallithea.lib.vcs.nodes import FileNode >> @@ -27,6 +28,11 @@ class _BackendTestMixin(object): >> """ >> recreate_repo_per_test = True >> >> + @pytest.fixture(autouse=True, >> + params=['hg', 'git']) >> + def set_backend_alias(cls, request): >> + cls.backend_alias = request.param >> + >> @classmethod >> def get_backend(cls): >> return vcs.get_backend(cls.backend_alias) >> diff --git a/kallithea/tests/vcs/test_archives.py >> b/kallithea/tests/vcs/test_archives.py >> --- a/kallithea/tests/vcs/test_archives.py >> +++ b/kallithea/tests/vcs/test_archives.py >> @@ -14,7 +14,7 @@ from kallithea.tests.vcs.base import _Ba >> from kallithea.tests.vcs.conf import TESTS_TMP_PATH >> >> >> -class ArchivesTestCaseMixin(_BackendTestMixin): >> +class TestArchivesTestCaseMixin(_BackendTestMixin): >> >> @classmethod >> def _get_commits(cls): >> @@ -95,11 +95,3 @@ class ArchivesTestCaseMixin(_BackendTest >> def test_archive_prefix_with_leading_slash(self): >> with pytest.raises(VCSError): >> self.tip.fill_archive(prefix='/any') >> - >> - >> -class TestGitArchive(ArchivesTestCaseMixin): >> - backend_alias = 'git' >> - >> - >> -class TestHgArchive(ArchivesTestCaseMixin): >> - backend_alias = 'hg' >> >> >> >> but when running this I get: >> >> $ pytest kallithea/tests/vcs/test_archives.py >> Test session starts (platform: linux2, Python 2.7.14, pytest 3.4.2, >> pytest-sugar 0.9.1) >> benchmark: 3.1.1 (defaults: timer=time.time disable_gc=False >> min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 >> warmup=False warmup_iterations=100000) >> rootdir: /home/tdescham/repo/contrib/kallithea/kallithea-review, >> inifile: pytest.ini >> plugins: sugar-0.9.1, localserver-0.4.1, benchmark-3.1.1 >> >> >> ???????????????????????????????????????????????????????????????????????? >> ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[hg] >> ????????????????????????????????????????????????????????????????????????? >> kallithea/tests/vcs/base.py:70: in setup_class >> Backend = cls.get_backend() >> kallithea/tests/vcs/base.py:38: in get_backend >> return vcs.get_backend(cls.backend_alias) >> E AttributeError: type object 'TestArchivesTestCaseMixin' has no >> attribute 'backend_alias' >> >> >> 7% ? >> >> ???????????????????????????????????????????????????????????????????????? >> ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[git] >> ???????????????????????????????????????????????????????????????????????? >> kallithea/tests/vcs/base.py:70: in setup_class >> Backend = cls.get_backend() >> kallithea/tests/vcs/base.py:38: in get_backend >> return vcs.get_backend(cls.backend_alias) >> E AttributeError: type object 'TestArchivesTestCaseMixin' has no >> attribute 'backend_alias' >> >> >> 14% ?? >> >> [..] >> >> >> >> So the parametrization seems to work because each test is run twice, >> but I can't seem to find the right way to set the backend_alias, such >> that a later call from a classmethod works correctly. 
>> >> I found some possibly useful magic by nicoddemus but could not make >> that work either: >> https://github.com/pytest-dev/pytest/issues/2618#issuecomment-318519875 >> >> Is it possible to achieve what I want, without changing each test >> method to take a fixture explicitly? >> >> Thanks, >> Thomas >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev From opensource at ronnypfannschmidt.de Wed Mar 14 01:33:04 2018 From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt) Date: Wed, 14 Mar 2018 06:33:04 +0100 Subject: [pytest-dev] Adding intro talk/slides to pytest-dev repos and licensing In-Reply-To: References: Message-ID: <32537297-f7b9-c611-d988-47c7fec6b1d9@ronnypfannschmidt.de> Hi floris, thanks for starting this one :) as for my answers 1. i think its a good idea to have them ? ? we should try to enable us and the community to give basic introductions, ? perhaps even very basic workshops without too much prep work ? (some kind of basic set for open source conferences and meet-ups/user groups) ? we should also consider if and how we want to mention/advertise ? the more commercial offerings for training/workshops at that place. 2. CC-BY is closer to MIT but for documentation/prose we specifically put into the public, ? i would prefer to use the CC-BY-SA for the Share-Alike component, ? but this is not a strong preference, just my personal opinion ? that for things such as talks the CC-BY-SA is more fair 3. i would like to start a new repo for talks/workshop materials ? and personally believe CC-BY-SA is a very reasonable and fair choice ? putting emphasis on freedoms to use, ? while also ensuring that enhancements ought to be shared. ? -- Ronny Am 13.03.2018 um 22:24 schrieb Floris Bruynooghe: > Hi all, > > Ronny was interested in updating/adapting a pytest-intro talk I've given > at a meetup sometime before [0]. He suggested to add it in a repo under > the pytest-dev organisation, which seems like a fine idea. Why not all > re-use slides if they're useful. > > However this also raises the question of what license to use. Pytest > currently use MIT for everything but I'm not sure how suitable this is > for non-code. After looking at Creative Commons, which seems like the > default license for this sort of things, we came to think the > Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) [1] would be a > reasonable choice for this. Personally I wouldn't mind something closer > to MIT but it seems Creative Commons doesn't provide something like > this. > > So my questions to you all are: > > 1. Do you think having a talk/slides in a repo under the pytest-dev > organisation is a good idea? > > 2. Do you think this is a suitable license for a talk? Do you have > any better suggestion? > > 3. Are we fine with putting CC BY-SA 4.0 licensed content in the > pytest-dev organisation's repos? > > > Cheers, > Floris > > > [0] http://devork.be/talks/pytest-intro/talk.html > [1] https://creativecommons.org/licenses/by-sa/4.0/ > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev From florian.schulze at gmx.net Wed Mar 14 04:35:13 2018 From: florian.schulze at gmx.net (Florian Schulze) Date: Wed, 14 Mar 2018 09:35:13 +0100 Subject: [pytest-dev] Moving pytest-{pep8,flakes} to pytest-dev on GitHub Message-ID: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> Hi! 
I'd like to propose moving pytest-pep8 from pytest-dev on Bitbucket to GitHub and my pytest-flakes from my GitHub repository to pytest-dev. Since I use both myself all the time, I would continue maintaining them. Thoughts? Process? Regards, Florian Schulze From opensource at ronnypfannschmidt.de Wed Mar 14 05:00:36 2018 From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt) Date: Wed, 14 Mar 2018 10:00:36 +0100 Subject: [pytest-dev] Moving pytest-{pep8, flakes} to pytest-dev on GitHub In-Reply-To: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> References: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> Message-ID: <050f0e2d-0a9f-1987-55ec-464fa3a959c4@ronnypfannschmidt.de> Sounds reasonable, i would like to note that by now i believe integrating linting into normal testing is a step back since the reporting needs are so different its one of the reasons why i dropped my work on pytest-codecheckers (which is more like flake8) -- Ronny Am 14.03.2018 um 09:35 schrieb Florian Schulze: > Hi! > > I'd like to propose moving pytest-pep8 from pytest-dev on Bitbucket to > GitHub and my pytest-flakes from my GitHub repository to pytest-dev. > Since I use both myself all the time, I would continue maintaining them. > > Thoughts? Process? > > Regards, > Florian Schulze > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev From florian.schulze at gmx.net Wed Mar 14 05:07:06 2018 From: florian.schulze at gmx.net (Florian Schulze) Date: Wed, 14 Mar 2018 10:07:06 +0100 Subject: [pytest-dev] Moving pytest-{pep8, flakes} to pytest-dev on GitHub In-Reply-To: <050f0e2d-0a9f-1987-55ec-464fa3a959c4@ronnypfannschmidt.de> References: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> <050f0e2d-0a9f-1987-55ec-464fa3a959c4@ronnypfannschmidt.de> Message-ID: <3B7C4773-3043-4C7D-8B1F-2697B704A22B@gmx.net> On 14 Mar 2018, at 10:00, RonnyPfannschmidt wrote: > Sounds reasonable, Ok > i would like to note that by now i believe integrating linting into > normal testing is a step back since the reporting needs are so > different > its one of the reasons why i dropped my work on pytest-codecheckers > (which is more like flake8) How do you tackle that then? I like things which work on the commit level, so one can improve things incrementally. But most solutions I saw work as pre-commit hooks, which makes them harder to enforce especially for Open Source, because all users need to install them. I haven't seen anything that works server side (or as a bot) on a per commit base (Open Source and not only as a GitHub Service). Regards, Florian Schulze > Am 14.03.2018 um 09:35 schrieb Florian Schulze: >> Hi! >> >> I'd like to propose moving pytest-pep8 from pytest-dev on Bitbucket >> to >> GitHub and my pytest-flakes from my GitHub repository to pytest-dev. >> Since I use both myself all the time, I would continue maintaining >> them. >> >> Thoughts? Process? 
>> >> Regards, >> Florian Schulze >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev From opensource at ronnypfannschmidt.de Wed Mar 14 05:50:31 2018 From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt) Date: Wed, 14 Mar 2018 10:50:31 +0100 Subject: [pytest-dev] Moving pytest-{pep8, flakes} to pytest-dev on GitHub In-Reply-To: <3B7C4773-3043-4C7D-8B1F-2697B704A22B@gmx.net> References: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> <050f0e2d-0a9f-1987-55ec-464fa3a959c4@ronnypfannschmidt.de> <3B7C4773-3043-4C7D-8B1F-2697B704A22B@gmx.net> Message-ID: i like to use a tox env with flake8 set up, additionally my editor always complains about all violations (imho if linting should happen as early as typing, but at least as early as when you save a file) on top of that linting issues trigger ci failure im currently also looking at tools like sideci for the gh stuff -- Ronny Am 14.03.2018 um 10:07 schrieb Florian Schulze: > On 14 Mar 2018, at 10:00, RonnyPfannschmidt wrote: > >> Sounds reasonable, > > Ok > >> i would like to note that by now i believe integrating linting into >> normal testing is a step back since the reporting needs are so different >> its one of the reasons why i dropped my work on pytest-codecheckers >> (which is more like flake8) > > How do you tackle that then? I like things which work on the commit > level, so one can improve things incrementally. But most solutions I saw > work as pre-commit hooks, which makes them harder to enforce especially > for Open Source, because all users need to install them. I haven't seen > anything that works server side (or as a bot) on a per commit base (Open > Source and not only as a GitHub Service). > > Regards, > Florian Schulze > >> Am 14.03.2018 um 09:35 schrieb Florian Schulze: >>> Hi! >>> >>> I'd like to propose moving pytest-pep8 from pytest-dev on Bitbucket to >>> GitHub and my pytest-flakes from my GitHub repository to pytest-dev. >>> Since I use both myself all the time, I would continue maintaining them. >>> >>> Thoughts? Process? >>> >>> Regards, >>> Florian Schulze >>> _______________________________________________ >>> pytest-dev mailing list >>> pytest-dev at python.org >>> https://mail.python.org/mailman/listinfo/pytest-dev From nicoddemus at gmail.com Wed Mar 14 06:16:24 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Wed, 14 Mar 2018 10:16:24 +0000 Subject: [pytest-dev] Adding intro talk/slides to pytest-dev repos and licensing In-Reply-To: <32537297-f7b9-c611-d988-47c7fec6b1d9@ronnypfannschmidt.de> References: <32537297-f7b9-c611-d988-47c7fec6b1d9@ronnypfannschmidt.de> Message-ID: Hi all, Nice initiative Floris and Ronny! I definitely like the idea of having a set of talks and introductions so more people are encouraged to give talks and teach pytest, from local group meetings to bigger conferences. Cheers, Bruno On Wed, Mar 14, 2018 at 2:33 AM RonnyPfannschmidt < opensource at ronnypfannschmidt.de> wrote: > Hi floris, > > thanks for starting this one :) > > as for my answers > > 1. i think its a good idea to have them > > we should try to enable us and the community to give basic introductions, > perhaps even very basic workshops without too much prep work > (some kind of basic set for open source conferences and meet-ups/user > groups) > > we should also consider if and how we want to mention/advertise > the more commercial offerings for training/workshops at that place. > > 2. 
CC-BY is closer to MIT but for documentation/prose we specifically > put into the public, > i would prefer to use the CC-BY-SA for the Share-Alike component, > but this is not a strong preference, just my personal opinion > that for things such as talks the CC-BY-SA is more fair > > 3. i would like to start a new repo for talks/workshop materials > and personally believe CC-BY-SA is a very reasonable and fair choice > putting emphasis on freedoms to use, > while also ensuring that enhancements ought to be shared. > > -- Ronny > > Am 13.03.2018 um 22:24 schrieb Floris Bruynooghe: > > Hi all, > > > > Ronny was interested in updating/adapting a pytest-intro talk I've given > > at a meetup sometime before [0]. He suggested to add it in a repo under > > the pytest-dev organisation, which seems like a fine idea. Why not all > > re-use slides if they're useful. > > > > However this also raises the question of what license to use. Pytest > > currently use MIT for everything but I'm not sure how suitable this is > > for non-code. After looking at Creative Commons, which seems like the > > default license for this sort of things, we came to think the > > Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) [1] would be a > > reasonable choice for this. Personally I wouldn't mind something closer > > to MIT but it seems Creative Commons doesn't provide something like > > this. > > > > So my questions to you all are: > > > > 1. Do you think having a talk/slides in a repo under the pytest-dev > > organisation is a good idea? > > > > 2. Do you think this is a suitable license for a talk? Do you have > > any better suggestion? > > > > 3. Are we fine with putting CC BY-SA 4.0 licensed content in the > > pytest-dev organisation's repos? > > > > > > Cheers, > > Floris > > > > > > [0] http://devork.be/talks/pytest-intro/talk.html > > [1] https://creativecommons.org/licenses/by-sa/4.0/ > > _______________________________________________ > > pytest-dev mailing list > > pytest-dev at python.org > > https://mail.python.org/mailman/listinfo/pytest-dev > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.schulze at gmx.net Wed Mar 14 06:53:09 2018 From: florian.schulze at gmx.net (Florian Schulze) Date: Wed, 14 Mar 2018 11:53:09 +0100 Subject: [pytest-dev] Moving pytest-{pep8, flakes} to pytest-dev on GitHub In-Reply-To: References: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> <050f0e2d-0a9f-1987-55ec-464fa3a959c4@ronnypfannschmidt.de> <3B7C4773-3043-4C7D-8B1F-2697B704A22B@gmx.net> Message-ID: On 14 Mar 2018, at 10:50, RonnyPfannschmidt wrote: > i like to use a tox env with flake8 set up, This is what devpi does and in my day to day use I miss issues sometimes, because it's a separate env and running all envs takes too long to do before push, so I only see it in CI. > additionally my editor always complains about all violations > (imho if linting should happen as early as typing, but at least as > early > as when you save a file) I do the same and it catches most things up front, but again it's nothing one can easily enforce or make in your face. > on top of that linting issues trigger ci failure > im currently also looking at tools like sideci for the gh stuff That's what pytest-pep8 and pytest-flake do for me. So I guess this is more a matter of taste which tools one uses. 
Thanks for the insight into your workflow. I will stop here, because it gets way off topic. Regards, Florian Schulze > Am 14.03.2018 um 10:07 schrieb Florian Schulze: >> On 14 Mar 2018, at 10:00, RonnyPfannschmidt wrote: >> >>> Sounds reasonable, >> >> Ok >> >>> i would like to note that by now i believe integrating linting into >>> normal testing is a step back since the reporting needs are so >>> different >>> its one of the reasons why i dropped my work on pytest-codecheckers >>> (which is more like flake8) >> >> How do you tackle that then? I like things which work on the commit >> level, so one can improve things incrementally. But most solutions I >> saw >> work as pre-commit hooks, which makes them harder to enforce >> especially >> for Open Source, because all users need to install them. I haven't >> seen >> anything that works server side (or as a bot) on a per commit base >> (Open >> Source and not only as a GitHub Service). >> >> Regards, >> Florian Schulze >> >>> Am 14.03.2018 um 09:35 schrieb Florian Schulze: >>>> Hi! >>>> >>>> I'd like to propose moving pytest-pep8 from pytest-dev on Bitbucket >>>> to >>>> GitHub and my pytest-flakes from my GitHub repository to >>>> pytest-dev. >>>> Since I use both myself all the time, I would continue maintaining >>>> them. >>>> >>>> Thoughts? Process? >>>> >>>> Regards, >>>> Florian Schulze >>>> _______________________________________________ >>>> pytest-dev mailing list >>>> pytest-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pytest-dev From nicoddemus at gmail.com Wed Mar 14 07:00:59 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Wed, 14 Mar 2018 11:00:59 +0000 Subject: [pytest-dev] Moving pytest-{pep8, flakes} to pytest-dev on GitHub In-Reply-To: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> References: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> Message-ID: Hi Florian, On Wed, Mar 14, 2018 at 5:40 AM Florian Schulze wrote: > I'd like to propose moving pytest-pep8 from pytest-dev on Bitbucket to > GitHub and my pytest-flakes from my GitHub repository to pytest-dev. > Since I use both myself all the time, I would continue maintaining them. > Sounds good, +1. Thoughts? Process? > Regarding moving the repositories from BitBucket to GitHub Ronny has some scripts to help that given he's moved some repositories already. About moving to the pytest-dev organization I believe you already have permission to do so, otherwise please let us know. Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hectorvd at gmail.com Wed Mar 14 11:12:40 2018 From: hectorvd at gmail.com (Hector Villafuerte) Date: Wed, 14 Mar 2018 11:12:40 -0400 Subject: [pytest-dev] Moving pytest-{pep8, flakes} to pytest-dev on GitHub In-Reply-To: <3B7C4773-3043-4C7D-8B1F-2697B704A22B@gmx.net> References: <3D2C3823-84AC-4A45-944F-BCBA98429F3F@gmx.net> <050f0e2d-0a9f-1987-55ec-464fa3a959c4@ronnypfannschmidt.de> <3B7C4773-3043-4C7D-8B1F-2697B704A22B@gmx.net> Message-ID: On Wed, Mar 14, 2018 at 5:07 AM, Florian Schulze wrote: > [...] > How do you tackle that then? I like things which work on the commit level, > so one can improve things incrementally. But most solutions I saw work as > pre-commit hooks, which makes them harder to enforce especially for Open > Source, because all users need to install them. I haven't seen anything > that works server side (or as a bot) on a per commit base (Open Source and > not only as a GitHub Service). > [...] 
> I'd like to give a shout-out to https://pre-commit.com/ It's a git hook manager that makes it easy to share hooks among developers and enforce them on CI as well. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Mar 15 11:10:54 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 15 Mar 2018 15:10:54 +0000 Subject: [pytest-dev] pytest-commit emails Message-ID: Hi everyone, I receive an email from pytest-commit at python.org whenever new merges happen on a branch of the pytest repository, but it seems it happens only for my own merges. It was my understanding I should receive for any merge, not just my own. Does this happen for anybody else or is it just me? Cheers, Bruno -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Mar 15 11:20:30 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 15 Mar 2018 15:20:30 +0000 Subject: [pytest-dev] "Reference" page is up! Message-ID: Hey everyone, I've merged the PR which includes a long-time-requested Reference page, check it out at: *https://docs.pytest.org/en/latest/reference.html * It might have some missing bits still, in which case feedback is welcome! Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christianzlong2 at gmail.com Thu Mar 15 13:41:52 2018 From: christianzlong2 at gmail.com (Christian Long) Date: Thu, 15 Mar 2018 12:41:52 -0500 Subject: [pytest-dev] "Reference" page is up! In-Reply-To: References: Message-ID: Wonderful resource, great to have it all in one place. Thanks! Christian On Thu, Mar 15, 2018 at 10:20 AM, Bruno Oliveira wrote: > Hey everyone, > > I've merged the PR which includes a long-time-requested Reference page, > check it out at: https://docs.pytest.org/en/latest/reference.html > > It might have some missing bits still, in which case feedback is welcome! > > Cheers, > Bruno. > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > From nicoddemus at gmail.com Thu Mar 15 13:52:14 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 15 Mar 2018 17:52:14 +0000 Subject: [pytest-dev] pytest 3.5 soon? Message-ID: Hi everyone, pytest 3.4 was released on 2018-01-30, and we had agreed that we should make a new minor release every 2 months or so to allow users to enjoy the new features and fixes in the ?features? branch. For 3.5 one of the major features was to replace the internal warning subsystem by the builtin warnings (#2452 ), but frankly I have not even started that task and it might take some time to iron out all the details involved. I propose we move #2452 to 3.6 and release 3.5 next week or so, given that we already have a ton of new goodies waiting . Cheers, Bruno. ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensource at ronnypfannschmidt.de Thu Mar 15 15:26:13 2018 From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt) Date: Thu, 15 Mar 2018 20:26:13 +0100 Subject: [pytest-dev] pytest 3.5 soon? In-Reply-To: References: Message-ID: <681c4a32-5978-2c68-296c-e2a3a39eea12@ronnypfannschmidt.de> +1 the next big things i will try to work on are * a new marker api and * a actually correct config initialization ? 
(the one we currently do is broken and creates bugs in xdist) i hope to get a part in for 3.5 but i believe most of the important things will need 3.6 -- Ronny Am 15.03.2018 um 18:52 schrieb Bruno Oliveira: > > Hi everyone, > > pytest 3.4 was released on 2018-01-30, and we had agreed that we > should make a new minor release every 2 months or so to allow users to > enjoy the new features and fixes in the ?features? branch. > > For 3.5 one of the major features was to replace the internal warning > subsystem by the builtin warnings (#2452 > ), but frankly I > have not even started that task and it might take some time to iron > out all the details involved. > > I propose we move |#2452| to 3.6 and release 3.5 next week or so, > given that we already have a ton of new goodies waiting > . > > Cheers, > Bruno. > > ? > > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Mar 15 15:30:18 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 15 Mar 2018 19:30:18 +0000 Subject: [pytest-dev] pytest 3.5 soon? In-Reply-To: <681c4a32-5978-2c68-296c-e2a3a39eea12@ronnypfannschmidt.de> References: <681c4a32-5978-2c68-296c-e2a3a39eea12@ronnypfannschmidt.de> Message-ID: On Thu, Mar 15, 2018 at 4:26 PM RonnyPfannschmidt < opensource at ronnypfannschmidt.de> wrote: > +1 > > the next big things i will try to work on are > > * a new marker api and > * a actually correct config initialization > (the one we currently do is broken and creates bugs in xdist) > Sounds good to me! > i hope to get a part in for 3.5 but i believe most of the important things > will need 3.6 > I was thinking of releasing 3.5 like next week, you think you can squeeze something until then or would you like more time? Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Mar 15 17:42:23 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 15 Mar 2018 21:42:23 +0000 Subject: [pytest-dev] Fixture ordering and scopes Message-ID: Hi everyone and Holger, Looking at the code below: data = {} @pytest.fixture(scope='session')def clean_data(): data.clear() @pytest.fixture(autouse=True)def add_data(): data['value'] = 1 @pytest.mark.usefixtures('clean_data')def test_foo(): assert data.get('value') Should test_foo fail or pass? Keep your answer in mind before proceeding. :) ------------------------------ I ask this because *unrelated* fixtures, i.e. which don?t depend on each other, are executed in the order they are declared in the test function signature, regardless of scope. The example above *fails*, because the fixtures are executed in (add_data, clean_data) order: add_data, being *autouse*, is added to the beginning of the argument list, and afterwards clean_data is inserted because of the usefixtures mark. This came up in #2405 , where Jason Coombs assumed that clean_data, being session-scoped, would always be executed first. I wonder if the current state of things is by design, or just an accident of how things work? I opened up a PR which does sort parameters by scope while keeping the relative order of fixtures of same scope intact, and the test suite passes without failures so if the current behavior is by design there are not tests enforcing it. 
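For those who have not opened the PR: the change is essentially a stable sort of the requested fixtures by scope, broadest scope first, so that fixtures sharing a scope keep their relative order. A rough sketch of the idea only, not the actual code in the PR:

    SCOPE_RANK = {'session': 0, 'module': 1, 'class': 2, 'function': 3}

    def sort_by_scope(fixture_names, scope_of):
        # sorted() is stable, so fixtures with the same scope keep
        # their original relative order; scope_of maps a fixture name
        # to its scope string
        return sorted(fixture_names, key=lambda name: SCOPE_RANK[scope_of(name)])
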
Using code in the PR, then things might also be surprising: @pytest.fixture(scope='session')def s(): pass @pytest.fixture(scope='function')def f(): pass def test_foo(f, s): pass s and f are unrelated and execute in (f, s) order in master, but in (s, f) order in my branch. Would like to hear what people think about this matter, as I think it is an important one, specially *Holger* if this is a design decision or just an accident of implementation, and if we should change it. Also, please feel free to just reply with what you thought what should be the behavior of the first sample. Cheers, Bruno. ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Mar 15 18:19:10 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 15 Mar 2018 22:19:10 +0000 Subject: [pytest-dev] No traceback filtering with __tracebackhide__ attribute using custom pytest plugin In-Reply-To: References: Message-ID: Hi Ringo, On Tue, Mar 13, 2018 at 6:47 AM Ringo De Smet ringo.de.smet at ontoforce.com wrote: > Am I right in saying that the pytest_runtest_makereport hook from > runner.py is the one calling into the filtering of the traceback and using > the __tracebackhide__ attribute? Why isn't this done for my custom test > suite? > You are right, pytest_runtest_makereport creates a longrepr instance depending on excinfo attribute of the call object[1]: https://github.com/pytest-dev/pytest/blob/fbcf1a90c9ffa849827918249fef1721a1f43bdd/_pytest/runner.py#L294-L307 To do that it calls item.repr_failure(excinfo), which calls _repr_failure_py in python.py which ends up calling _prunetraceback which filters the traceback here: https://github.com/pytest-dev/pytest/blob/fbcf1a90c9ffa849827918249fef1721a1f43bdd/_pytest/python.py#L578 .filter() is declared here: https://github.com/pytest-dev/pytest/blob/fbcf1a90c9ffa849827918249fef1721a1f43bdd/_pytest/_code/code.py#L307 And the default value uses a lambda where ishidden() checks for __tracebackhide__ attribute in the function?s locals. Unfortunately I?m not sure why this does not work with your plugin, but I hope this gives an overview and helps finding out the problem. Cheers, Bruno. ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From variedthoughts at gmail.com Thu Mar 15 18:47:37 2018 From: variedthoughts at gmail.com (Brian Okken) Date: Thu, 15 Mar 2018 15:47:37 -0700 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Bruno, Please, merge that PR! caveat: I have not reviewed the code. However, ... I get numerous questions about it, and I always tell people to create artificial dependencies between fixtures that need to run in a certain order. The general mental model that people have for fixtures is that they are run in scope order. I think the current behavior of file order overriding scope order is insane and a bug. - Brian On Thu, Mar 15, 2018 at 2:42 PM, Bruno Oliveira wrote: > Hi everyone and Holger, > > Looking at the code below: > > > data = {} > @pytest.fixture(scope='session')def clean_data(): > data.clear() > @pytest.fixture(autouse=True)def add_data(): > data['value'] = 1 > @pytest.mark.usefixtures('clean_data')def test_foo(): > assert data.get('value') > > Should test_foo fail or pass? Keep your answer in mind before proceeding. > :) > ------------------------------ > > I ask this because *unrelated* fixtures, i.e. 
which don?t depend on each > other, are executed in the order they are declared in the test function > signature, regardless of scope. > > The example above *fails*, because the fixtures are executed in (add_data, > clean_data) order: add_data, being *autouse*, is added to the beginning > of the argument list, and afterwards clean_data is inserted because of > the usefixtures mark. > > This came up in #2405 , > where Jason Coombs assumed that clean_data, being session-scoped, would > always be executed first. > > I wonder if the current state of things is by design, or just an accident > of how things work? > > I opened up a PR which > does sort parameters by scope while keeping the relative order of fixtures > of same scope intact, and the test suite passes without failures so if the > current behavior is by design there are not tests enforcing it. Using code > in the PR, then things might also be surprising: > > @pytest.fixture(scope='session')def s(): pass > @pytest.fixture(scope='function')def f(): pass > def test_foo(f, s): > pass > > s and f are unrelated and execute in (f, s) order in master, but in (s, f) > order in my branch. > > Would like to hear what people think about this matter, as I think it is > an important one, specially *Holger* if this is a design decision or just > an accident of implementation, and if we should change it. > > Also, please feel free to just reply with what you thought what should be > the behavior of the first sample. > > Cheers, > Bruno. > ? > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From isaulv at gmail.com Thu Mar 15 19:01:52 2018 From: isaulv at gmail.com (Isaul Vargas) Date: Thu, 15 Mar 2018 19:01:52 -0400 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: I too have observed this weird ordering. I would like like the order of fixtures to always respect the order of scopes. On Thu, Mar 15, 2018 at 6:47 PM, Brian Okken wrote: > Bruno, > > Please, merge that PR! > caveat: I have not reviewed the code. > > However, ... > I get numerous questions about it, and I always tell people to create > artificial dependencies between fixtures that need to run in a certain > order. > The general mental model that people have for fixtures is that they are > run in scope order. > I think the current behavior of file order overriding scope order is > insane and a bug. > > - Brian > > On Thu, Mar 15, 2018 at 2:42 PM, Bruno Oliveira > wrote: > >> Hi everyone and Holger, >> >> Looking at the code below: >> >> >> data = {} >> @pytest.fixture(scope='session')def clean_data(): >> data.clear() >> @pytest.fixture(autouse=True)def add_data(): >> data['value'] = 1 >> @pytest.mark.usefixtures('clean_data')def test_foo(): >> assert data.get('value') >> >> Should test_foo fail or pass? Keep your answer in mind before >> proceeding. :) >> ------------------------------ >> >> I ask this because *unrelated* fixtures, i.e. which don?t depend on each >> other, are executed in the order they are declared in the test function >> signature, regardless of scope. >> >> The example above *fails*, because the fixtures are executed in (add_data, >> clean_data) order: add_data, being *autouse*, is added to the beginning >> of the argument list, and afterwards clean_data is inserted because of >> the usefixtures mark. 
>> >> This came up in #2405 , >> where Jason Coombs assumed that clean_data, being session-scoped, would >> always be executed first. >> >> I wonder if the current state of things is by design, or just an accident >> of how things work? >> >> I opened up a PR which >> does sort parameters by scope while keeping the relative order of fixtures >> of same scope intact, and the test suite passes without failures so if the >> current behavior is by design there are not tests enforcing it. Using code >> in the PR, then things might also be surprising: >> >> @pytest.fixture(scope='session')def s(): pass >> @pytest.fixture(scope='function')def f(): pass >> def test_foo(f, s): >> pass >> >> s and f are unrelated and execute in (f, s) order in master, but in (s, >> f) order in my branch. >> >> Would like to hear what people think about this matter, as I think it is >> an important one, specially *Holger* if this is a design decision or >> just an accident of implementation, and if we should change it. >> >> Also, please feel free to just reply with what you thought what should be >> the behavior of the first sample. >> >> Cheers, >> Bruno. >> ? >> >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev >> >> > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Mar 15 21:58:34 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 01:58:34 +0000 Subject: [pytest-dev] preparing a breaking internal change - splitting Session into the node and the plugin In-Reply-To: References: <4b029c5a-4e58-4771-2763-bd466f14c02f@ronnypfannschmidt.de> <9726F379-58B6-422B-BAFF-76F7205F3842@gmx.net> Message-ID: Hi everyone, On Fri, Mar 9, 2018 at 10:46 AM RonnyPfannschmidt < opensource at ronnypfannschmidt.de> wrote: > > > I wouldn't take a major release to cram in as much changes as possible. > > IMHO it's fine to have a major release for just one breaking change. > > That way it's easier to manage possible fallout and build confidence > > that major releases aren't *that* bad. The possibility of proper > > deprecations trumps the wish to clean up as much as possible in one go. > > Frequent small steps are better than big steps every once in a while. > > The goal will be the same, but with less disruption. > > really good point, this reminds me of the way setuptools handles things > - each major release only handles one singular point, which in turn > ensures smooth transitions > Good idea, +1 to introduce just one major change during each major release. Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensource at ronnypfannschmidt.de Fri Mar 16 01:39:09 2018 From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt) Date: Fri, 16 Mar 2018 06:39:09 +0100 Subject: [pytest-dev] pytest 3.5 soon? 
In-Reply-To: References: <681c4a32-5978-2c68-296c-e2a3a39eea12@ronnypfannschmidt.de> Message-ID: <1521178749.13191.6.camel@ronnypfannschmidt.de> Am Donnerstag, den 15.03.2018, 19:30 +0000 schrieb Bruno Oliveira: > On Thu, Mar 15, 2018 at 4:26 PM RonnyPfannschmidt annschmidt.de> wrote: > > > > > > > > > > +1 > > > > the next big things i will try to work on are > > > > > > > > * a new marker api and > > > > * a actually correct config initialization > > > > (the one we currently do is broken and creates bugs in > > xdist) > > Sounds good to me! > > > i hope to get a part in for 3.5 but i believe most of the > > important things will need 3.6 > > I was thinking of releasing 3.5 like next week, you think you can > squeeze something until then or would you like more time? please dont defer a timed release for my convenience - if i fail to get the changes in thatsa failure i can easyly accept, but i wouldn't want to delay releases for that cheers,Ronny > Cheers, > Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ringo.de.smet at ontoforce.com Fri Mar 16 04:18:18 2018 From: ringo.de.smet at ontoforce.com (Ringo De Smet) Date: Fri, 16 Mar 2018 09:18:18 +0100 Subject: [pytest-dev] No traceback filtering with __tracebackhide__ attribute using custom pytest plugin In-Reply-To: References: Message-ID: Bruno, On Thu, Mar 15, 2018 at 11:19 PM, Bruno Oliveira wrote: > You are right, pytest_runtest_makereport creates a longrepr instance > depending on excinfo attribute of the call object[1]: > > https://github.com/pytest-dev/pytest/blob/fbcf1a90c9ffa849827918249fef17 > 21a1f43bdd/_pytest/runner.py#L294-L307 > > To do that it calls item.repr_failure(excinfo), which calls > _repr_failure_py in python.py which ends up calling _prunetraceback which > filters the traceback here: > > https://github.com/pytest-dev/pytest/blob/fbcf1a90c9ffa849827918249fef17 > 21a1f43bdd/_pytest/python.py#L578 > > .filter() is declared here: > > https://github.com/pytest-dev/pytest/blob/fbcf1a90c9ffa849827918249fef17 > 21a1f43bdd/_pytest/_code/code.py#L307 > > And the default value uses a lambda where ishidden() checks for > __tracebackhide__ attribute in the function?s locals. > You definitely pointed me in the right direction by mentioning that the filtering passes via an item object. So thanks a lot for this info. Checking the class hierarchy, I found this (simplified): class Node(object) # nodes.py def _repr_failure_py repr_failure = _repr_failure_py def _pruntraceback: *pass* # No basic filtering here class Item(Node) # nodes.py ... # no overriden methods here class Function(FunctionMixin, Item) # python.py def _repr_failure_py def repr_failure def _prunetraceback # Whole filtering mechanism is triggered here! And here is my code: class SpecExample(Item) # no overridden methods here I think it's clear now why it doesn't work. Since I provide instances of SpecExample to the pytest runner, the filtering defined in FunctionMixin is not triggered. 
As a solution I overridden _prunetraceback: def _prunetraceback(self, excinfo): filtered_traceback = excinfo.traceback.filter(filter_traceback) filtered_traceback = filtered_traceback.filter() excinfo.traceback = filtered_traceback With this in place, my stack trace becomes this now: ============================= test session starts ============================== platform darwin -- Python 3.6.4, pytest-3.4.2, py-1.5.2, pluggy-0.6.0 rootdir: /Users/ringods/Projects/ontoforce/metis/execution_layer/spec, inifile: plugins: mamba-1.0.0 collected 6 items action_base_spec.py ....F action_base_spec.py:None () ../python-mamba/pytest_mamba/plugin.py:135: in runtest raise mamba_error.exception ../../venv/lib/python3.6/site-packages/mamba/example.py:43: in _execute_test self.test(execution_context) action_base_spec.py:44: in 00000007__it is ok-- expect(rs).to(equal('value2')) E AssertionError: E expected: 'value' to equal 'value2' . [100%] =================================== FAILURES =================================== Now this is a bit dirty as I have code duplication from what you have in FunctionMixin. Let's look at pytest from 2 different angles: - pytest as a test runner (nodes.py, runner.py) - pytest as the test implementation framework (python.py/unittest.py) I would expect the basic traceback filtering to work via the runner, irrespective of which custom tests a thirdparty plugin provides. What do I consider basic filtering? Filtering out the traceback entries of the runner as defined in python.py:filter_traceback function (pluggy, etc.) as well as the default filter function regarding __tracebackhide__. This is exactly what I activated via my custom _prunetraceback method. May I suggest to push this basic filtering higher in that class hierarchy? You will probably know better where that is/should be, but I guess in class Item would be a good fit. Should I file this as an improvement on Github? Greetings, Ringo -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Fri Mar 16 04:23:19 2018 From: flub at devork.be (Floris Bruynooghe) Date: Fri, 16 Mar 2018 09:23:19 +0100 Subject: [pytest-dev] pytest-commit emails In-Reply-To: References: Message-ID: Bruno Oliveira writes: > Hi everyone, > > I receive an email from pytest-commit at python.org whenever new merges happen > on a branch of the pytest repository, but it seems it happens only for my > own merges. It was my understanding I should receive for any merge, not > just my own. > > Does this happen for anybody else or is it just me? Hmm, the last email there not from you which I see is from me in May last year... It's plausible that that's the last time I merged something though. But I'd have guessed that at least Ronny might have merged things as well since then? The configuration on github looks fine to me, maybe the list administrator (Holger?) could check the mailing list settings? Cheers Floris From flub at devork.be Fri Mar 16 04:42:38 2018 From: flub at devork.be (Floris Bruynooghe) Date: Fri, 16 Mar 2018 09:42:38 +0100 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Bruno Oliveira writes: > Hi everyone and Holger, > > Looking at the code below: > > > data = {} > @pytest.fixture(scope='session')def clean_data(): > data.clear() > @pytest.fixture(autouse=True)def add_data(): > data['value'] = 1 > @pytest.mark.usefixtures('clean_data')def test_foo(): > assert data.get('value') > > Should test_foo fail or pass? 
Keep your answer in mind before > proceeding. :) Nice example. To be honest, my first reaction was that this is probably undefined. Part of my unhelpful side also thinks this is horrible anyway so why should we help people write bad code? > I ask this because *unrelated* fixtures, i.e. which don?t depend on each > other, are executed in the order they are declared in the test function > signature, regardless of scope. That the order is defined *is* surprising to me. And it honestly smells like an implementation detail. > I opened up a PR which > does sort parameters by scope while keeping the relative order of fixtures > of same scope intact, and the test suite passes without failures so if the > current behavior is by design there are not tests enforcing it. Using code > in the PR, then things might also be surprising: > > @pytest.fixture(scope='session')def s(): pass > @pytest.fixture(scope='function')def f(): pass > def test_foo(f, s): > pass > > s and f are unrelated and execute in (f, s) order in master, but in (s, f) > order in my branch. This seems very reasonable to me, but then I never considered the order of arguments important at all. I don't really mind introducing the patch you suggest. But the pedant in me would still not want to make any more guarantees other then the scoping, i.e. if you have test_foo(f1, f2, s) then the order of f1,f2 should remain undefined despite that s is going to be called first. > Would like to hear what people think about this matter, as I think it is an > important one, specially *Holger* if this is a design decision or just an > accident of implementation, and if we should change it. My biggest worry is that this is a slippery slope. E.g. why isn't a session-scoped fixture initialised right at the start of the session? Once your mental model has adjusted to this the current behaviour is pretty natural (and probably why I don't see it as an issue). But I accept I know probably too much about pytest and do think the PR might make things more intuitive for people. Cheers, Floris From flub at devork.be Fri Mar 16 04:46:09 2018 From: flub at devork.be (Floris Bruynooghe) Date: Fri, 16 Mar 2018 09:46:09 +0100 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Brian Okken writes: > I get numerous questions about it, and I always tell people to create > artificial dependencies between fixtures that need to run in a certain > order. I'm not sure I follow why you consider them to be *artificial* dependencies. If a function-scoped fixture depends on anything of a session-scoped fixture, surely you'd rather have the dependency explicitly in your face rather then it slipping past by luck? > The general mental model that people have for fixtures is that they are run > in scope order. > I think the current behavior of file order overriding scope order is insane > and a bug. My general mental model of fixtures is that they are as lazy as possible. This is probably also somewhat incorrect. From kvas.it at gmail.com Fri Mar 16 06:16:50 2018 From: kvas.it at gmail.com (Vasily Kuznetsov) Date: Fri, 16 Mar 2018 10:16:50 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: I very much agree with Floris that if you need fixture A to run before fixture B and otherwise things break, this is called "dependency" and it's better if it's explicitly declared. 
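To spell out what I mean by "explicitly declared", something like this tiny sketch (fixture names are made up):

    import pytest

    @pytest.fixture(scope='session')
    def fixture_a():
        pass  # whatever must happen first

    @pytest.fixture
    def fixture_b(fixture_a):
        # requesting fixture_a turns the required ordering into an
        # explicit dependency instead of relying on scope order
        pass
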
I can't easily imagine a situation where declaring dependencies would be too much work or not desirable for some other reason but maybe it's just my imagination not being good enough :). Outer scopes running before inner scopes does sound kind of logical but everything running lazily on demand (as it does now) also makes sense. If asked to choose, I'd probably leave the order unspecified to help people not forget to declare their dependencies and to have more flexibility of implementation. Maybe this should be addressed with documentation instead. Cheers, Vasily On Fri, Mar 16, 2018 at 9:46 AM Floris Bruynooghe wrote: > Brian Okken writes: > > I get numerous questions about it, and I always tell people to create > > artificial dependencies between fixtures that need to run in a certain > > order. > > I'm not sure I follow why you consider them to be *artificial* > dependencies. If a function-scoped fixture depends on anything of a > session-scoped fixture, surely you'd rather have the dependency > explicitly in your face rather then it slipping past by luck? > > > The general mental model that people have for fixtures is that they are > run > > in scope order. > > I think the current behavior of file order overriding scope order is > insane > > and a bug. > > My general mental model of fixtures is that they are as lazy as > possible. This is probably also somewhat incorrect. > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Fri Mar 16 07:29:16 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 11:29:16 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Hi Floris, On Fri, Mar 16, 2018 at 5:42 AM Floris Bruynooghe wrote: > Bruno Oliveira writes: > > > Hi everyone and Holger, > > > > Looking at the code below: > > > > > > data = {} > > @pytest.fixture(scope='session')def clean_data(): > > data.clear() > > @pytest.fixture(autouse=True)def add_data(): > > data['value'] = 1 > > @pytest.mark.usefixtures('clean_data')def test_foo(): > > assert data.get('value') > > > > Should test_foo fail or pass? Keep your answer in mind before > > proceeding. :) > > Nice example. To be honest, my first reaction was that this is probably > undefined. > > Part of my unhelpful side also thinks this is horrible anyway so why > should we help people write bad code? > > > I ask this because *unrelated* fixtures, i.e. which don?t depend on each > > other, are executed in the order they are declared in the test function > > signature, regardless of scope. > > That the order is defined *is* surprising to me. And it honestly smells > like an implementation detail. > > > I opened up a PR which > > does sort parameters by scope while keeping the relative order of > fixtures > > of same scope intact, and the test suite passes without failures so if > the > > current behavior is by design there are not tests enforcing it. Using > code > > in the PR, then things might also be surprising: > > > > @pytest.fixture(scope='session')def s(): pass > > @pytest.fixture(scope='function')def f(): pass > > def test_foo(f, s): > > pass > > > > s and f are unrelated and execute in (f, s) order in master, but in (s, > f) > > order in my branch. > > This seems very reasonable to me, but then I never considered the order > of arguments important at all. 
I don't really mind introducing the > patch you suggest. But the pedant in me would still not want to make > any more guarantees other then the scoping, i.e. if you have > test_foo(f1, f2, s) then the order of f1,f2 should remain undefined > despite that s is going to be called first. > While I agree with the sentiment to not try to make too much guarantees so we can change things in the future if so desired, in this case I think preserving the order of the arguments for fixtures of same scope makes sense and is useful. For example suppose you have two session scoped fixtures from different libraries which don't know about each other: * `log_setup` from `core_logging` setups the builtin logging module in a certain way (logs to a central server for instance). * `db_setup` from `db` configures the database for testing, and makes use of the builtin logging to log where the database is being created, etc. In my application tests, which use `core_logging` and `db`, I want to setup the fixtures in the correct order and the canonical way for that would be to define an autouse session-scoped fixture in my conftest.py: @pytest.fixture(scope='session', autouse=True) def setup_logging_and_db(log_setup, db_setup): pass So the order is important here, and leaving it undefined will require people to write this instead: @pytest.fixture(scope='session', autouse=True) def my_setup_logging(log_setup): pass @pytest.fixture(scope='session', autouse=True) def my_db_setup(my_setup_logging ): pass While it works, it feels like unnecessary boilerplate. The example above is not something I made up btw, I encountered that situation more than once at work: fixtures from unrelated projects that should be used together in a certain order, and not by a design mistake, but just how things are supposed to work. > Would like to hear what people think about this matter, as I think it is > an > > important one, specially *Holger* if this is a design decision or just an > > accident of implementation, and if we should change it. > > My biggest worry is that this is a slippery slope. E.g. why isn't a > session-scoped fixture initialised right at the start of the session? > Once your mental model has adjusted to this the current behaviour is > pretty natural (and probably why I don't see it as an issue). But I > accept I know probably too much about pytest and do think the PR might > make things more intuitive for people. > I think the issue here is people getting tripped by it, and not the case that they are having inter-dependencies between fixtures that are implicit, but that they assume if a test function requests fixtures of different scopes, the higher scopes will be instantiated first. Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Fri Mar 16 07:34:09 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 11:34:09 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Hi Vasily! On Fri, Mar 16, 2018 at 7:17 AM Vasily Kuznetsov wrote: > I very much agree with Floris that if you need fixture A to run before > fixture B and otherwise things break, this is called "dependency" and it's > better if it's explicitly declared. 
> Definitely, if a fixture requires something that is done by another fixture, then that dependency should be explicitly defined; but the issue is more that people expect higher level scoped fixtures to be executed first, and when you mix autouse fixtures and usefixtures markers, the order is non-intuitive. > I can't easily imagine a situation where declaring dependencies would be > too much work or not desirable for some other reason but maybe it's just my > imagination not being good enough :). > I wrote an example which demonstrates this in a separate reply to Floris. Outer scopes running before inner scopes does sound kind of logical but > everything running lazily on demand (as it does now) also makes sense. > Just to be clear, in my PR fixtures are still created lazily, it is just that we sort them by scope (preserving order) first. Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kvas.it at gmail.com Fri Mar 16 08:18:26 2018 From: kvas.it at gmail.com (Vasily Kuznetsov) Date: Fri, 16 Mar 2018 12:18:26 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Hi Bruno, Your example is a good argument, I haven't considered third party fixtures over which the developer of the test suite has less control. I agree that requiring users to jumps through extra hoops to give third-party fixtures the desired order is too high of a price to pay for the nudge towards good style (explicitly declaring dependencies) that we get from undefined order. Once we decide to give some guarantees about the order of fixture execution your new approach certainly makes sense. Cheers, Vasily On Fri, Mar 16, 2018 at 12:34 PM Bruno Oliveira wrote: > Hi Vasily! > > On Fri, Mar 16, 2018 at 7:17 AM Vasily Kuznetsov > wrote: > >> I very much agree with Floris that if you need fixture A to run before >> fixture B and otherwise things break, this is called "dependency" and it's >> better if it's explicitly declared. >> > > Definitely, if a fixture requires something that is done by another > fixture, then that dependency should be explicitly defined; but the issue > is more that people expect higher level scoped fixtures to be executed > first, and when you mix autouse fixtures and usefixtures markers, the order > is non-intuitive. > > > >> I can't easily imagine a situation where declaring dependencies would be >> too much work or not desirable for some other reason but maybe it's just my >> imagination not being good enough :). >> > > I wrote an example which demonstrates this in a separate reply to Floris. > > Outer scopes running before inner scopes does sound kind of logical but >> everything running lazily on demand (as it does now) also makes sense. >> > > Just to be clear, in my PR fixtures are still created lazily, it is just > that we sort them by scope (preserving order) first. > > Cheers, > Bruno. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Fri Mar 16 09:09:21 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 13:09:21 +0000 Subject: [pytest-dev] pytest-commit emails In-Reply-To: References: Message-ID: On Fri, Mar 16, 2018 at 5:23 AM Floris Bruynooghe wrote: > Bruno Oliveira writes: > > > Hi everyone, > > > > I receive an email from pytest-commit at python.org whenever new merges > happen > > on a branch of the pytest repository, but it seems it happens only for my > > own merges. 
It was my understanding I should receive for any merge, not > > just my own. > > > > Does this happen for anybody else or is it just me? > > Hmm, the last email there not from you which I see is from me in May > last year... It's plausible that that's the last time I merged > something though. But I'd have guessed that at least Ronny might have > merged things as well since then? > > The configuration on github looks fine to me, maybe the list > administrator (Holger?) could check the mailing list settings? > I just merged a PR and received an email with the subject: [Pytest-commit] [pytest-dev/pytest] 3c3fc3: Add test capturing new expectation. Ref #3314. Did anybody else receive this? Cheers, Bruno > -------------- next part -------------- An HTML attachment was scrubbed... URL: From isaulv at gmail.com Fri Mar 16 09:27:49 2018 From: isaulv at gmail.com (Isaul Vargas) Date: Fri, 16 Mar 2018 09:27:49 -0400 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Another reason in support of a well defined order is because some library authors will create function scoped fixtures that depend on the objects in a session scoped fixture. If those tests do run out of order, tests will break. On Fri, Mar 16, 2018 at 8:18 AM, Vasily Kuznetsov wrote: > Hi Bruno, > > Your example is a good argument, I haven't considered third party fixtures > over which the developer of the test suite has less control. I agree that > requiring users to jumps through extra hoops to give third-party fixtures > the desired order is too high of a price to pay for the nudge towards good > style (explicitly declaring dependencies) that we get from undefined order. > > Once we decide to give some guarantees about the order of fixture > execution your new approach certainly makes sense. > > Cheers, > Vasily > > > On Fri, Mar 16, 2018 at 12:34 PM Bruno Oliveira > wrote: > >> Hi Vasily! >> >> On Fri, Mar 16, 2018 at 7:17 AM Vasily Kuznetsov >> wrote: >> >>> I very much agree with Floris that if you need fixture A to run before >>> fixture B and otherwise things break, this is called "dependency" and it's >>> better if it's explicitly declared. >>> >> >> Definitely, if a fixture requires something that is done by another >> fixture, then that dependency should be explicitly defined; but the issue >> is more that people expect higher level scoped fixtures to be executed >> first, and when you mix autouse fixtures and usefixtures markers, the order >> is non-intuitive. >> >> >> >>> I can't easily imagine a situation where declaring dependencies would be >>> too much work or not desirable for some other reason but maybe it's just my >>> imagination not being good enough :). >>> >> >> I wrote an example which demonstrates this in a separate reply to Floris. >> >> Outer scopes running before inner scopes does sound kind of logical but >>> everything running lazily on demand (as it does now) also makes sense. >>> >> >> Just to be clear, in my PR fixtures are still created lazily, it is just >> that we sort them by scope (preserving order) first. >> >> Cheers, >> Bruno. >> > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From flub at devork.be Fri Mar 16 13:35:58 2018 From: flub at devork.be (Floris Bruynooghe) Date: Fri, 16 Mar 2018 18:35:58 +0100 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Bruno Oliveira writes: > > On Fri, Mar 16, 2018 at 5:42 AM Floris Bruynooghe wrote: > >> Bruno Oliveira writes: >> > For example suppose you have two session scoped fixtures from different > libraries which don't know about each other: > > * `log_setup` from `core_logging` setups the builtin logging module in a > certain way (logs to a central server for instance). > * `db_setup` from `db` configures the database for testing, and makes use > of the builtin logging to log where the database is being created, etc. > > In my application tests, which use `core_logging` and `db`, I want to setup > the fixtures in the correct order and the canonical way for that would be > to define an autouse session-scoped fixture in my conftest.py: > > @pytest.fixture(scope='session', autouse=True) > def setup_logging_and_db(log_setup, db_setup): pass > > So the order is important here, and leaving it undefined will require > people to write this instead: > > @pytest.fixture(scope='session', autouse=True) > def my_setup_logging(log_setup): pass > > @pytest.fixture(scope='session', autouse=True) > def my_db_setup(my_setup_logging ): pass > > While it works, it feels like unnecessary boilerplate. I'm not sure you finished your example there. I'm actually curious what you'd do. It is a great example, it really stretches the composability of fixtures. If someone where to ask me this though I think the only thing I'd come up with is this: import db @pytest.fixture(scope='session', autouse=True) def my_db(log_setup): return db.db_setup() # This needs to do the right thing. I'm actually pretty fine with this, fixtures are after all just a convenient shortcut to do dependency injection into your test functions. There's no reason that fixtures have be able to do everything we can already do in a programming language. But as much as I may not like it, I guess it is a good argument for guaranteeing the behaviour you use. From flub at devork.be Fri Mar 16 14:18:54 2018 From: flub at devork.be (Floris Bruynooghe) Date: Fri, 16 Mar 2018 19:18:54 +0100 Subject: [pytest-dev] pytest-commit emails In-Reply-To: References: Message-ID: Bruno Oliveira writes: > On Fri, Mar 16, 2018 at 5:23 AM Floris Bruynooghe wrote: > >> Bruno Oliveira writes: >> >> > Hi everyone, >> > >> > I receive an email from pytest-commit at python.org whenever new merges >> happen >> > on a branch of the pytest repository, but it seems it happens only for my >> > own merges. It was my understanding I should receive for any merge, not >> > just my own. >> > >> > Does this happen for anybody else or is it just me? >> >> Hmm, the last email there not from you which I see is from me in May >> last year... It's plausible that that's the last time I merged >> something though. But I'd have guessed that at least Ronny might have >> merged things as well since then? >> >> The configuration on github looks fine to me, maybe the list >> administrator (Holger?) could check the mailing list settings? >> > > I just merged a PR and received an email with the subject: > > [Pytest-commit] [pytest-dev/pytest] 3c3fc3: Add test capturing new > expectation. Ref #3314. > > Did anybody else receive this? Yes, I received this as well. 
From nicoddemus at gmail.com Fri Mar 16 14:52:57 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 18:52:57 +0000 Subject: [pytest-dev] pytest-commit emails In-Reply-To: References: Message-ID: Oh OK, thanks Floris. Weird, I don't receive emails from Ronny it seems, he merged some PRs this week but I didn't receive anything. Cheers, Bruno. On Fri, Mar 16, 2018 at 3:18 PM Floris Bruynooghe wrote: > Bruno Oliveira writes: > > > On Fri, Mar 16, 2018 at 5:23 AM Floris Bruynooghe > wrote: > > > >> Bruno Oliveira writes: > >> > >> > Hi everyone, > >> > > >> > I receive an email from pytest-commit at python.org whenever new merges > >> happen > >> > on a branch of the pytest repository, but it seems it happens only > for my > >> > own merges. It was my understanding I should receive for any merge, > not > >> > just my own. > >> > > >> > Does this happen for anybody else or is it just me? > >> > >> Hmm, the last email there not from you which I see is from me in May > >> last year... It's plausible that that's the last time I merged > >> something though. But I'd have guessed that at least Ronny might have > >> merged things as well since then? > >> > >> The configuration on github looks fine to me, maybe the list > >> administrator (Holger?) could check the mailing list settings? > >> > > > > I just merged a PR and received an email with the subject: > > > > [Pytest-commit] [pytest-dev/pytest] 3c3fc3: Add test capturing new > > expectation. Ref #3314. > > > > Did anybody else receive this? > > Yes, I received this as well. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Fri Mar 16 14:59:32 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 18:59:32 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: On Fri, Mar 16, 2018 at 2:36 PM Floris Bruynooghe wrote: > Bruno Oliveira writes: > > > So the order is important here, and leaving it undefined will require > > people to write this instead: > > > > @pytest.fixture(scope='session', autouse=True) > > def my_setup_logging(log_setup): pass > > > > @pytest.fixture(scope='session', autouse=True) > > def my_db_setup(my_setup_logging ): pass > > > > While it works, it feels like unnecessary boilerplate. > > I'm not sure you finished your example there. I'm actually curious what > you'd do. > I did finish it :) Because `my_db_setup` depends on `my_setup_logging`, this guarantees that they will execute in the correct order. It is a great example, it really stretches the composability of > fixtures. If someone where to ask me this though I think the only thing > I'd come up with is this: > > import db > > @pytest.fixture(scope='session', autouse=True) > def my_db(log_setup): > return db.db_setup() # This needs to do the right thing. > > This is fine if `db_setup` is a fixture which doesn't depend on others, but if `db_setup` depends on other fixtures (say `tmpdir`) then this won't work so well. Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From 03sjbrown at gmail.com Fri Mar 16 15:15:12 2018 From: 03sjbrown at gmail.com (Shawn Brown) Date: Fri, 16 Mar 2018 15:15:12 -0400 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? Message-ID: I understand how to use pytest_assertrepr_compare() to return custom assertion reports--but this interface requires a comparison operator. 
I'm hoping to write a plugin that makes custom reports for statements like: assert myfunc(myobj) Where myfunc() returns True or False. But without an operator (e.g. "=="), I can't intercept the results to build my custom report. While I *could* use a comparison operator, doing so is semantically wrong for most of my use cases! And in the spirit of pytest and Pythonic code, I'm hoping to handle the "assert myfunc(myobj)" case as written because it's clean and caters to the test writer. I investigated the idea of trying to manipulate the AST before the tests are executed by using other pytest-supported hooks but this doesn't seem possible. I also looked at the idea of subclassing pytests's AssertionRewriter but there doesn't seem to be a mechanism for supporting this within the context of non-invasive plugin behavior--it looks like I would have to monkey patch pytest itself which isn't a route I want to take. I've also looked at returning a subclassed False instance that contains my report info and uses a custom repr for displaying it. But this was too much of a hack and caused problems. My current current "solution" is to have an assert_myfunc() function that raises errors directly so the statements read "assert_myfunc(myobj)". But I'd really like to use Python's native assert for this. Are there other approaches I might take to rewrite an assertion report for statements like "assert myfunc(myobj)" in a pytest-friendly/non-fragile way? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Fri Mar 16 15:25:32 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 19:25:32 +0000 Subject: [pytest-dev] License for pytest-dev/pytest-design Message-ID: Hi folks, pytest-design is the repository used to store logo and t-shirt designs from our 2016 sprint, but it currently it isn't under any LICENSE. I've been asked if it was OK to use the logos to make a t-shirt (and of course I know it is), but we should add a license to the repository allowing just that. What license should we add? Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Fri Mar 16 18:19:42 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Fri, 16 Mar 2018 22:19:42 +0000 Subject: [pytest-dev] No traceback filtering with __tracebackhide__ attribute using custom pytest plugin In-Reply-To: References: Message-ID: Hi Ringo, I would expect the basic traceback filtering to work via the runner, > irrespective of which custom tests a thirdparty plugin provides. What do I > consider basic filtering? Filtering out the traceback entries of the runner > as defined in python.py:filter_traceback function (pluggy, etc.) as well as > the default filter function regarding __tracebackhide__. This is exactly > what I activated via my custom _prunetraceback method. > > May I suggest to push this basic filtering higher in that class hierarchy? > You will probably know better where that is/should be, but I guess in class > Item would be a good fit. Should I file this as an improvement on Github? > I think it makes sense; Node already has a method repr_failure and _prunetraceback, so it already knows the concept of a traceback; my initial take is that Node should *not* know about that at all, neither Item: they are abstract objects that can represent anything that pytest can collect and run, even things which may not produce a traceback upon failure. 
But I digress, given the current state of things it seems to make sense to push basic filter to Item as you suggest. Cheers, Bruno. ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Fri Mar 16 19:01:50 2018 From: flub at devork.be (Floris Bruynooghe) Date: Sat, 17 Mar 2018 00:01:50 +0100 Subject: [pytest-dev] License for pytest-dev/pytest-design In-Reply-To: References: Message-ID: Bruno Oliveira writes: > Hi folks, > > pytest-design is the repository used to store logo and t-shirt designs from > our 2016 sprint, but it currently it isn't under any LICENSE. Whoops, that's pretty terrible! > I've been > asked if it was OK to use the logos to make a t-shirt (and of course I know > it is), but we should add a license to the repository allowing just that. Does it make sense to put it under MIT as well? I'm not sure I'd want the attribution part of Creative Commons license here as that's somewhat hard with stickers or t-shirts. It seems currently it's Vasily and me who committed to the repo, though Florian also contributed at the sprint IIRC. Which I guess means us three have to agree to whatever license we end up with. So I'll propose MIT as I don't know any better :-) Cheers, Floris From kvas.it at gmail.com Sat Mar 17 06:03:47 2018 From: kvas.it at gmail.com (Vasily Kuznetsov) Date: Sat, 17 Mar 2018 10:03:47 +0000 Subject: [pytest-dev] License for pytest-dev/pytest-design In-Reply-To: References: Message-ID: I'm ok with MIT although I would think that some kind of CC would be more appropriate in this case. I'm also not very sure about this though... As for contributors, Brianna also contributed quite a lot during the sprint and "asserts before reverts" motto is her idea if I remember it right. Cheers, Vasily On Sat, Mar 17, 2018 at 12:02 AM Floris Bruynooghe wrote: > Bruno Oliveira writes: > > > Hi folks, > > > > pytest-design is the repository used to store logo and t-shirt designs > from > > our 2016 sprint, but it currently it isn't under any LICENSE. > > Whoops, that's pretty terrible! > > > I've been > > asked if it was OK to use the logos to make a t-shirt (and of course I > know > > it is), but we should add a license to the repository allowing just that. > > Does it make sense to put it under MIT as well? I'm not sure I'd want > the attribution part of Creative Commons license here as that's somewhat > hard with stickers or t-shirts. > > > It seems currently it's Vasily and me who committed to the repo, though > Florian also contributed at the sprint IIRC. Which I guess means us > three have to agree to whatever license we end up with. So I'll propose > MIT as I don't know any better :-) > > Cheers, > Floris > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Sat Mar 17 07:25:29 2018 From: flub at devork.be (Floris Bruynooghe) Date: Sat, 17 Mar 2018 12:25:29 +0100 Subject: [pytest-dev] pytest-commit emails In-Reply-To: References: Message-ID: Bruno Oliveira writes: > Oh OK, thanks Floris. > > Weird, I don't receive emails from Ronny it seems, he merged some PRs this > week but I didn't receive anything. Neither did I... 
From patrickdepinguin at gmail.com Sat Mar 17 15:52:05 2018 From: patrickdepinguin at gmail.com (Thomas De Schampheleire) Date: Sat, 17 Mar 2018 20:52:05 +0100 Subject: [pytest-dev] Parametrized autouse fixtures In-Reply-To: References: Message-ID: Hi, I now prepared the following PR for Kallithea to use these parametrized fixtures: https://bitbucket.org/conservancy/kallithea/pull-requests/389/tests-vcs-automatic-parametrization/diff but I have a question: The parametrized fixture is set up here in a base class: https://bitbucket.org/conservancy/kallithea/pull-requests/389/tests-vcs-automatic-parametrization/diff#Lkallithea/tests/vcs/base.pyT55 and has two parameters 'hg' and 'git'. Most test classes indeed need to be parametrized this way, but some should only be run for one of the parameters, either hg or git. So far I handled this with a fixture inside the special classes, skipping tests as required: https://bitbucket.org/conservancy/kallithea/pull-requests/389/tests-vcs-automatic-parametrization/diff#Lkallithea/tests/vcs/test_git.pyT649 + @pytest.fixture(autouse=True) + def _skip_unsupported_scm(self): + if self.backend_class.scm != 'git': + pytest.skip('Unsupported scm for this test: %s' % self.backend_class.scm) but I wonder if there is a better way to do this, avoiding following two issues: 1. the above fixture needs to be duplicated for each test class. For a fixture that does not need 'self', one could put it in conftest.py and use @pytest.mark.usefixtures() to avoid the duplication, but it is my understanding that this is not possible if you need access to self. 2. the 'skips' are shown in the test overview, which is not very useful in this case IMO because it is normal that they are skipped, i.e. it is not due to an environment mismatch, or a temporarily skipped testcase. It will always be skipped. Thanks, Thomas 2018-03-13 23:05 GMT+01:00 Thomas De Schampheleire : > Thanks a lot, this did help me very well! > > /Thomas > > 2018-03-12 22:16 GMT+01:00 Bruno Oliveira : >> Hi Thomas, >> >> It seems the problem is that you are mixing xunit-style fixtures (by >> implementing a setup_class classmethod) and pytest-style fixtures: they >> currently don?t play well together, setup_* methods execute before all other >> fixtures (see #517). There are plans to fix that, but this scheduled for 3.6 >> only. >> >> But there?s an easy workaround, just change your ?setup_class? method into a >> proper fixture instead: >> >> class TestArchivesTestCaseMixin: >> >> @classmethod >> @pytest.fixture(autouse=True, params=['hg', 'git'], scope='class') >> def _configure_backend(cls, request): >> backend_alias = request.param >> Backend = vcs.get_backend(backend_alias) >> ... >> >> Hope that helps. >> >> Cheers, >> Bruno. >> >> >> On Mon, Mar 12, 2018 at 5:48 PM Thomas De Schampheleire >> wrote: >>> >>> Hello, >>> >>> The Kallithea project is a repository hosting and review system, >>> currently supporting git and hg. We currently have some test cases >>> that need to be run for these two version control systems. >>> >>> Previously this was done with some Python magic, which was now >>> simplified and made more explicit in commit: >>> >>> https://kallithea-scm.org/repos/kallithea/changeset/45a281a0f36ff59ffaa4aa0107fabfc1a6310251 >>> >>> but I assume it can be made more automatic with pytest fixtures. I was >>> thinking to use an autouse, parametrized fixture to set the backend to >>> 'hg' and 'git' respectively. But I can't make it work. 
>>> >>> Here is the change I did on top of the mentioned commit: >>> >>> diff --git a/kallithea/tests/vcs/base.py b/kallithea/tests/vcs/base.py >>> --- a/kallithea/tests/vcs/base.py >>> +++ b/kallithea/tests/vcs/base.py >>> @@ -5,6 +5,7 @@ InMemoryChangeset class is working prope >>> import os >>> import time >>> import datetime >>> +import pytest >>> >>> from kallithea.lib import vcs >>> from kallithea.lib.vcs.nodes import FileNode >>> @@ -27,6 +28,11 @@ class _BackendTestMixin(object): >>> """ >>> recreate_repo_per_test = True >>> >>> + @pytest.fixture(autouse=True, >>> + params=['hg', 'git']) >>> + def set_backend_alias(cls, request): >>> + cls.backend_alias = request.param >>> + >>> @classmethod >>> def get_backend(cls): >>> return vcs.get_backend(cls.backend_alias) >>> diff --git a/kallithea/tests/vcs/test_archives.py >>> b/kallithea/tests/vcs/test_archives.py >>> --- a/kallithea/tests/vcs/test_archives.py >>> +++ b/kallithea/tests/vcs/test_archives.py >>> @@ -14,7 +14,7 @@ from kallithea.tests.vcs.base import _Ba >>> from kallithea.tests.vcs.conf import TESTS_TMP_PATH >>> >>> >>> -class ArchivesTestCaseMixin(_BackendTestMixin): >>> +class TestArchivesTestCaseMixin(_BackendTestMixin): >>> >>> @classmethod >>> def _get_commits(cls): >>> @@ -95,11 +95,3 @@ class ArchivesTestCaseMixin(_BackendTest >>> def test_archive_prefix_with_leading_slash(self): >>> with pytest.raises(VCSError): >>> self.tip.fill_archive(prefix='/any') >>> - >>> - >>> -class TestGitArchive(ArchivesTestCaseMixin): >>> - backend_alias = 'git' >>> - >>> - >>> -class TestHgArchive(ArchivesTestCaseMixin): >>> - backend_alias = 'hg' >>> >>> >>> >>> but when running this I get: >>> >>> $ pytest kallithea/tests/vcs/test_archives.py >>> Test session starts (platform: linux2, Python 2.7.14, pytest 3.4.2, >>> pytest-sugar 0.9.1) >>> benchmark: 3.1.1 (defaults: timer=time.time disable_gc=False >>> min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 >>> warmup=False warmup_iterations=100000) >>> rootdir: /home/tdescham/repo/contrib/kallithea/kallithea-review, >>> inifile: pytest.ini >>> plugins: sugar-0.9.1, localserver-0.4.1, benchmark-3.1.1 >>> >>> >>> ???????????????????????????????????????????????????????????????????????? >>> ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[hg] >>> ????????????????????????????????????????????????????????????????????????? >>> kallithea/tests/vcs/base.py:70: in setup_class >>> Backend = cls.get_backend() >>> kallithea/tests/vcs/base.py:38: in get_backend >>> return vcs.get_backend(cls.backend_alias) >>> E AttributeError: type object 'TestArchivesTestCaseMixin' has no >>> attribute 'backend_alias' >>> >>> >>> 7% ? >>> >>> ???????????????????????????????????????????????????????????????????????? >>> ERROR at setup of TestArchivesTestCaseMixin.test_archive_zip[git] >>> ???????????????????????????????????????????????????????????????????????? >>> kallithea/tests/vcs/base.py:70: in setup_class >>> Backend = cls.get_backend() >>> kallithea/tests/vcs/base.py:38: in get_backend >>> return vcs.get_backend(cls.backend_alias) >>> E AttributeError: type object 'TestArchivesTestCaseMixin' has no >>> attribute 'backend_alias' >>> >>> >>> 14% ?? >>> >>> [..] >>> >>> >>> >>> So the parametrization seems to work because each test is run twice, >>> but I can't seem to find the right way to set the backend_alias, such >>> that a later call from a classmethod works correctly. 
>>> >>> I found some possibly useful magic by nicoddemus but could not make >>> that work either: >>> https://github.com/pytest-dev/pytest/issues/2618#issuecomment-318519875 >>> >>> Is it possible to achieve what I want, without changing each test >>> method to take a fixture explicitly? >>> >>> Thanks, >>> Thomas >>> _______________________________________________ >>> pytest-dev mailing list >>> pytest-dev at python.org >>> https://mail.python.org/mailman/listinfo/pytest-dev From flub at devork.be Sat Mar 17 17:04:08 2018 From: flub at devork.be (Floris Bruynooghe) Date: Sat, 17 Mar 2018 22:04:08 +0100 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: Message-ID: Hi Shawn, Shawn Brown <03sjbrown at gmail.com> writes: > I understand how to use pytest_assertrepr_compare() to return custom > assertion reports--but this interface requires a comparison operator. I'm > hoping to write a plugin that makes custom reports for statements like: > > assert myfunc(myobj) > > Where myfunc() returns True or False. But without an operator (e.g. "=="), > I can't intercept the results to build my custom report. After refreshing my mind on the AST-rewriting code again -that's always tricky- I think you have been looking into the wrong direction. I believe something as simple as this might do: def test_foo(): assert myfunc(42) def myfunc(*args): __tracebackhide__ if not args: pytest.fail('Hello there\nLook, a multi-line message') raise AssertionError('multi\nline') # alternative, never gets here return True Does this solve your usecase? Cheers, Floris From flub at devork.be Sat Mar 17 17:38:10 2018 From: flub at devork.be (Floris Bruynooghe) Date: Sat, 17 Mar 2018 22:38:10 +0100 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Bruno Oliveira writes: > On Fri, Mar 16, 2018 at 2:36 PM Floris Bruynooghe wrote: > >> Bruno Oliveira writes: >> >> > So the order is important here, and leaving it undefined will require >> > people to write this instead: >> > >> > @pytest.fixture(scope='session', autouse=True) >> > def my_setup_logging(log_setup): pass >> > >> > @pytest.fixture(scope='session', autouse=True) >> > def my_db_setup(my_setup_logging ): pass >> > >> > While it works, it feels like unnecessary boilerplate. >> >> I'm not sure you finished your example there. I'm actually curious what >> you'd do. >> > > I did finish it :) > > Because `my_db_setup` depends on `my_setup_logging`, this guarantees that > they will execute in the correct order. I'm still not following this, I'm probably being silly. You have 4 autouse session-scoped fixtures but with a dependecy chain as my_db_setup -> my_setup_logging -> setup_logging and then an unrelated db_setup. What am I missing here? > It is a great example, it really stretches the composability of >> fixtures. If someone where to ask me this though I think the only thing >> I'd come up with is this: >> >> import db >> >> @pytest.fixture(scope='session', autouse=True) >> def my_db(log_setup): >> return db.db_setup() # This needs to do the right thing. >> >> > This is fine if `db_setup` is a fixture which doesn't depend on others, but > if `db_setup` depends on other fixtures (say `tmpdir`) then this won't work > so well. True. 
I'm not sure I have a good answer to this :-) Cheers, Floris From flub at devork.be Sat Mar 17 17:30:20 2018 From: flub at devork.be (Floris Bruynooghe) Date: Sat, 17 Mar 2018 22:30:20 +0100 Subject: [pytest-dev] Parametrized autouse fixtures In-Reply-To: References: Message-ID: Thomas De Schampheleire writes: > Hi, > > I now prepared the following PR for Kallithea to use these > parametrized fixtures: > https://bitbucket.org/conservancy/kallithea/pull-requests/389/tests-vcs-automatic-parametrization/diff > but I have a question: > > The parametrized fixture is set up here in a base class: > https://bitbucket.org/conservancy/kallithea/pull-requests/389/tests-vcs-automatic-parametrization/diff#Lkallithea/tests/vcs/base.pyT55 > and has two parameters 'hg' and 'git'. > Most test classes indeed need to be parametrized this way, but some > should only be run for one of the parameters, either hg or git. > > So far I handled this with a fixture inside the special classes, > skipping tests as required: > https://bitbucket.org/conservancy/kallithea/pull-requests/389/tests-vcs-automatic-parametrization/diff#Lkallithea/tests/vcs/test_git.pyT649 > + @pytest.fixture(autouse=True) > + def _skip_unsupported_scm(self): > + if self.backend_class.scm != 'git': > + pytest.skip('Unsupported scm for this test: %s' % > self.backend_class.scm) > > but I wonder if there is a better way to do this, avoiding following two issues: > 1. the above fixture needs to be duplicated for each test class. For a > fixture that does not need 'self', one could put it in conftest.py and > use @pytest.mark.usefixtures() to avoid the duplication, but it is my > understanding that this is not possible if you need access to self. > 2. the 'skips' are shown in the test overview, which is not very > useful in this case IMO because it is normal that they are skipped, > i.e. it is not due to an environment mismatch, or a temporarily > skipped testcase. It will always be skipped. Firstly a disclaimer that I haven't wrapped my head around all your code or followed the previous history of this thread. So my suggestion may be somewhat off. I'd be quite tempted to try and solve this with a mark. E.g. you can mark tests with @pytest.mark.vcs, or @pytest.mark.vcs('git', 'hg') or @pytest.mark.vcs('git') or @pytest.mark.vcs('hg') - I imagine the first two mean the same thing. You can mark whole classes, modules or individual tests. You can probably even mix them and Ronny can doubtless explain all the corner cases and pitfalls you encounter by mixing these. But that should probably not deter you too much. Anyway, once your tests are marked you can either use this mark info directly into your existing autouse fixture. But as you say you'll then get skips which really shouldn't have been tests at all. So you can also implement your own pytest_generate_tests hook which uses the mark info to generate the correct tests only. Then the autouse fixture only needs to check if a tests needs a backend and if so initialise it. I'll again add a disclaimer that this is what I'd try and build. I haven't done that, so this might break down somewhere. Cheers, Floris From 03sjbrown at gmail.com Sun Mar 18 17:09:15 2018 From: 03sjbrown at gmail.com (Shawn Brown) Date: Sun, 18 Mar 2018 17:09:15 -0400 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: Message-ID: Hi Floris and thanks for the feedback. Unfortunately, this does not solve my usecase. 
I'm trying to handle cases where the following statement would pass: assert myfunc(failing_input) == False But where this next statement would fail using my custom report: assert myfunc(failing_input) Calling myfunc() needs to return True or False (or at least Truthy or Falsy)--this is locked-in behavior. The assert_myfunc() in my original post is a wrapper that does basically the same thing as your example. Although I omitted the native assert in my examples because it is only evaluated when it's already guaranteed to pass. On Sat, Mar 17, 2018 at 5:04 PM, Floris Bruynooghe wrote: > Hi Shawn, > > Shawn Brown <03sjbrown at gmail.com> writes: > > > I understand how to use pytest_assertrepr_compare() to return custom > > assertion reports--but this interface requires a comparison operator. I'm > > hoping to write a plugin that makes custom reports for statements like: > > > > assert myfunc(myobj) > > > > Where myfunc() returns True or False. But without an operator (e.g. > "=="), > > I can't intercept the results to build my custom report. > > After refreshing my mind on the AST-rewriting code again -that's always > tricky- I think you have been looking into the wrong direction. I > believe something as simple as this might do: > > def test_foo(): > assert myfunc(42) > > def myfunc(*args): > __tracebackhide__ > if not args: > pytest.fail('Hello there\nLook, a multi-line message') > raise AssertionError('multi\nline') # alternative, never gets here > return True > > Does this solve your usecase? > > Cheers, > Floris > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 03sjbrown at gmail.com Sun Mar 18 17:11:48 2018 From: 03sjbrown at gmail.com (Shawn Brown) Date: Sun, 18 Mar 2018 17:11:48 -0400 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: Message-ID: If I could somehow trigger pytest_assertrepr_compare() so it would receive op=None, left=myresult, right=None (or something similar), then I could handle failing cases by getting the needed information from a property of a Falsy return value. I'm also wondering: might it be possible to use pytest_addhooks() to add a hook to modify the AST or even modify the pre-parsed source? If I could automatically re-write "assert myfunc(myobj)" but leave other cases unchanged, this would clean things up for me. Although as I mentioned earlier, I don't see a way to interact with the AST or the AssertionRewriter within pytest's plugin system. -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Mon Mar 19 10:03:59 2018 From: flub at devork.be (Floris Bruynooghe) Date: Mon, 19 Mar 2018 15:03:59 +0100 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: Message-ID: On Sun, Mar 18 2018, Shawn Brown wrote: > Unfortunately, this does not solve my usecase. I'm trying to handle cases > where the following statement would pass: > > assert myfunc(failing_input) == False > > But where this next statement would fail using my custom report: > > assert myfunc(failing_input) > > Calling myfunc() needs to return True or False (or at least Truthy or > Falsy)--this is locked-in behavior. I'm not sure if this is compatible with Python's semantics really. If I understand correctly you're asking for a full-on macro implementation on Python or something. Which in theory you could do with an AST NodeVisitor, but really Python isn't made for this -- sounds like you'd enjoy lisp! 
;-) The best thing I can suggest is to make use of the:: assert myfunc(failing_input), repr(myfunc(failing_input())) functionality to also get a custom error message. Here your myfunc() whould have to return some object which both implements __bool__ as well as __repr__ I guess. Maybe there's a feature request in here for something like this:: class Foo: def __bool__(self): return False def __repr__(self): return 'multiline\nstring' assert Foo() To actually show the repr in the error message, which it currently doesn't. I'd like to know what other people think of such a feature though, and haven't thought through all the implications yet. But I'm curious, would something like that solve your case? Cheers, Floris From holger at merlinux.eu Mon Mar 19 10:13:06 2018 From: holger at merlinux.eu (holger krekel) Date: Mon, 19 Mar 2018 15:13:06 +0100 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: Message-ID: <20180319141306.GQ4712@beto> On Mon, Mar 19, 2018 at 15:03 +0100, Floris Bruynooghe wrote: > On Sun, Mar 18 2018, Shawn Brown wrote: > > Unfortunately, this does not solve my usecase. I'm trying to handle cases > > where the following statement would pass: > > > > assert myfunc(failing_input) == False > > > > But where this next statement would fail using my custom report: > > > > assert myfunc(failing_input) > > > > Calling myfunc() needs to return True or False (or at least Truthy or > > Falsy)--this is locked-in behavior. > > I'm not sure if this is compatible with Python's semantics really. If I > understand correctly you're asking for a full-on macro implementation on > Python or something. Which in theory you could do with an AST > NodeVisitor, but really Python isn't made for this -- sounds like you'd > enjoy lisp! ;-) > > The best thing I can suggest is to make use of the:: > > assert myfunc(failing_input), repr(myfunc(failing_input())) i wonder if one could try to rewrite the ast for "assert myfunc(x)" to "assert __pytest_funcrepr_helper(myfunc(x), 'myfunc(x)')" with something like: class __pytest_funcrepr_helper: def __init__(self, val, source): self.val = val self.source = source def __bool__(self): return bool(self.val) def __repr__(self): return "{!r} returned non-true {!r}".format(self.source, self.val) but maybe i am not grasping all details involved. It's been a while since i looked into ast-rewriting ... holger > functionality to also get a custom error message. Here your myfunc() > whould have to return some object which both implements __bool__ as well > as __repr__ I guess. > > Maybe there's a feature request in here for something like this:: > > class Foo: > def __bool__(self): > return False > > def __repr__(self): > return 'multiline\nstring' > > assert Foo() > > To actually show the repr in the error message, which it currently > doesn't. I'd like to know what other people think of such a feature > though, and haven't thought through all the implications yet. But I'm > curious, would something like that solve your case? 
> > Cheers, > Floris > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev From rpfannsc at redhat.com Mon Mar 19 12:57:43 2018 From: rpfannsc at redhat.com (Ronny Pfannschmidt) Date: Mon, 19 Mar 2018 17:57:43 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review Message-ID: Hi everyone, in https://github.com/pytest-dev/pytest/pull/3317 i introduced a new way to store marks and aslo consistently use it (this fixed a few bugs pytest dragged along since years) some of the things i had to do is introduce a internal FunctionDefinition node that's currently just being used by the collectors to give metafunc correct access to marks. i'm hoping to eventually elevate that in future, but the amount of spaghetti makes that a major undertaking, pytest is just so tightly coupled all over the place that it is suffocating and really error inducing I also did deprecate markinfo attributes, so everyone using them will get deprecation warnings. as far as i'm concerned, markinfo attributes where never ever a correct way to handle distinct marks and quite a few bugs in pytest came from using combined marks in markinfo instead of distinct marks. i also changed get_marker, and starting to realize that i need to make it return a MarkInfo again (at least one with correct data this time around) (that's upcoming in a few) other than that please take a look and find details to scrutinize -- Ronny -- Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensource at ronnypfannschmidt.de Mon Mar 19 14:02:55 2018 From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt) Date: Mon, 19 Mar 2018 19:02:55 +0100 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: <20180319141306.GQ4712@beto> References: <20180319141306.GQ4712@beto> Message-ID: hi everyone, **re-sent using the other sending address** this is just about single value assertion helpers i logged an feature request about that a few year back see https://github.com/pytest-dev/pytest/issues/95 - so basically this use-case was known since 2011 ^^ and doesn't require ast rewriting lice macros, just proper engineering of the representation and handling of single values in the assertion rewriter. -- Ronny Am 19.03.2018 um 15:13 schrieb holger krekel: > On Mon, Mar 19, 2018 at 15:03 +0100, Floris Bruynooghe wrote: >> On Sun, Mar 18 2018, Shawn Brown wrote: >>> Unfortunately, this does not solve my usecase. I'm trying to handle cases >>> where the following statement would pass: >>> >>> assert myfunc(failing_input) == False >>> >>> But where this next statement would fail using my custom report: >>> >>> assert myfunc(failing_input) >>> >>> Calling myfunc() needs to return True or False (or at least Truthy or >>> Falsy)--this is locked-in behavior. >> I'm not sure if this is compatible with Python's semantics really. If I >> understand correctly you're asking for a full-on macro implementation on >> Python or something. Which in theory you could do with an AST >> NodeVisitor, but really Python isn't made for this -- sounds like you'd >> enjoy lisp! 
;-) >> >> The best thing I can suggest is to make use of the:: >> >> assert myfunc(failing_input), repr(myfunc(failing_input())) > i wonder if one could try to rewrite the ast for "assert myfunc(x)" to > "assert __pytest_funcrepr_helper(myfunc(x), 'myfunc(x)')" with something like: > > class __pytest_funcrepr_helper: > def __init__(self, val, source): > self.val = val > self.source = source > def __bool__(self): > return bool(self.val) > def __repr__(self): > return "{!r} returned non-true {!r}".format(self.source, self.val) > > but maybe i am not grasping all details involved. It's been a while since > i looked into ast-rewriting ... > > holger > > >> functionality to also get a custom error message. Here your myfunc() >> whould have to return some object which both implements __bool__ as well >> as __repr__ I guess. >> >> Maybe there's a feature request in here for something like this:: >> >> class Foo: >> def __bool__(self): >> return False >> >> def __repr__(self): >> return 'multiline\nstring' >> >> assert Foo() >> >> To actually show the repr in the error message, which it currently >> doesn't. I'd like to know what other people think of such a feature >> though, and haven't thought through all the implications yet. But I'm >> curious, would something like that solve your case? >> >> Cheers, >> Floris >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev From flub at devork.be Tue Mar 20 04:18:17 2018 From: flub at devork.be (Floris Bruynooghe) Date: Tue, 20 Mar 2018 09:18:17 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: References: Message-ID: Hi Ronny, On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: > Hi everyone, > > in https://github.com/pytest-dev/pytest/pull/3317 i introduced a new way to > store marks and aslo consistently use it (this fixed a few bugs pytest > dragged along since years) I've spent a while reading the PR but to be honest it's kind of hard to know how sane things are. It looks ok, but I don't have a view of the larger picture. Are there somewhere some notes about where the marks come from, what the problems with them are and the vision of how the problems are going to be solved and what the end-state will look like? For example from the current PR it's not obvious how users are meant to be using marks. You still apply them using @pytest.mark.name as before, but how are we supposed to access them? How are we setting them pragmatically? > I also did deprecate markinfo attributes, > so everyone using them will get deprecation warnings. That's a lot of people probably. How long are we giving users for this? > as far as i'm concerned, markinfo attributes where never ever a correct way > to handle distinct marks and quite a few bugs in pytest came from using > combined marks in markinfo instead of distinct marks. 
A short summary of these issues would help loads :) Cheers, Floris From rpfannsc at redhat.com Tue Mar 20 06:05:35 2018 From: rpfannsc at redhat.com (Ronny Pfannschmidt) Date: Tue, 20 Mar 2018 11:05:35 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: References: Message-ID: 2018-03-20 9:18 GMT+01:00 Floris Bruynooghe : > Hi Ronny, > > On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: > > > Hi everyone, > > > > in https://github.com/pytest-dev/pytest/pull/3317 i introduced a new > way to > > store marks and aslo consistently use it (this fixed a few bugs pytest > > dragged along since years) > > I've spent a while reading the PR but to be honest it's kind of hard to > know how sane things are. It looks ok, but I don't have a view of the > larger picture. > > Are there somewhere some notes about where the marks come from, what the > problems with them are and the vision of how the problems are going to > be solved and what the end-state will look like? > > For example from the current PR it's not obvious how users are meant to > be using marks. You still apply them using @pytest.mark.name as before, > but how are we supposed to access them? How are we setting them > pragmatically? > ?the way for declaring them is still the same the addmarker apis still work the same? > > > I also did deprecate markinfo attributes, > > so everyone using them will get deprecation warnings. > > That's a lot of people probably. How long are we giving users for this? > ? > ?the support for accessing them can be kept for quite a while, everyone using them will be told to use the new api i would like to remove it at the beginning of 2019 ? > > > > as far as i'm concerned, markinfo attributes where never ever a correct > way > > to handle distinct marks and quite a few bugs in pytest came from using > > combined marks in markinfo instead of distinct marks. > > A short summary of these issues would help loads :) > ?basically MarkInfo objects are subject to marker transfer bugs, marker smearing and everything wrong with marks these days they are one of holgers infamous minimal changes that basically ensured we never ever had correctly working marks from the time we had support for more than one marker of the same name, and it got worse by the addition of marker transfers (for context - pytest marks started as thing to update test function __dict__ with its keyword arguments, they never got de-tangled from that messy legacy, and that was a major source of really bad bugs) also they are part of a inconsistent return value issues that previously plaqued get_marker?, now get_markers returns a reasonably correct MarkInfo and find_markers is finally an api that provides basic correct values cheers, Ronny > > > Cheers, > Floris > -- Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Tue Mar 20 07:02:22 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Tue, 20 Mar 2018 11:02:22 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: On Sat, Mar 17, 2018 at 6:38 PM Floris Bruynooghe wrote: > > I'm still not following this, I'm probably being silly. 
You have 4 > autouse session-scoped fixtures but with a dependency chain as > my_db_setup -> my_setup_logging -> setup_logging and then an unrelated > db_setup. What am I missing here? > Oh my bad, db_setup should be a dependency of my_db_setup: @pytest.fixture(scope='session', autouse=True) def my_setup_logging(log_setup): pass @pytest.fixture(scope='session', autouse=True) def my_db_setup(my_setup_logging, db_setup): pass Sorry for the mistake! Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Tue Mar 20 09:45:47 2018 From: flub at devork.be (Floris Bruynooghe) Date: Tue, 20 Mar 2018 14:45:47 +0100 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: On Tue, Mar 20 2018, Bruno Oliveira wrote: > On Sat, Mar 17, 2018 at 6:38 PM Floris Bruynooghe wrote: > >> >> I'm still not following this, I'm probably being silly. You have 4 >> autouse session-scoped fixtures but with a dependency chain as >> my_db_setup -> my_setup_logging -> setup_logging and then an unrelated >> db_setup. What am I missing here? >> > > Oh my bad, db_setup should be a dependency of my_db_setup: > > @pytest.fixture(scope='session', autouse=True) > def my_setup_logging(log_setup): pass > > @pytest.fixture(scope='session', autouse=True) > def my_db_setup(my_setup_logging, db_setup): pass To me this looks like it still depends on the order of fixtures being used. The end now depends on two chains: my_setup_logging -> log_setup and db_setup (single-item-chain). But there's nothing, other than the rule under discussion, which prioritises one chain over the other. Mind you, I'm by now perfectly fine with the rule. I'm only discussing it to understand the semantics of the world without that rule as just a curious exercise. Cheers, Floris From nicoddemus at gmail.com Tue Mar 20 09:57:39 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Tue, 20 Mar 2018 13:57:39 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: On Tue, Mar 20, 2018 at 10:45 AM Floris Bruynooghe wrote: > On Tue, Mar 20 2018, Bruno Oliveira wrote: > > > > @pytest.fixture(scope='session', autouse=True) > > def my_setup_logging(log_setup): pass > > > > @pytest.fixture(scope='session', autouse=True) > > def my_db_setup(my_setup_logging, db_setup): pass > > To me this looks like it still depends on the order of fixtures being > used. The end now depends on two chains: my_setup_logging -> log_setup > and db_setup (single-item-chain). But there's nothing, other than the > rule under discussion, which prioritises one chain over the other. > Actually because of the fact that `my_db_setup` depends on `my_setup_logging`, this guarantees that `my_setup_logging` will be instantiated first. Because the fixtures are autouse, any item will depend on the fixtures in either of the following orders: 1) `my_setup_logging`, and then `my_db_setup`: if this is the case, then that's the order we ultimately want; 2) `my_db_setup` and then `my_setup_logging`: here in order to instantiate `my_db_setup`, we *need* to instantiate `my_setup_logging` first, thus guaranteeing the order we want. As we are discussing, the code above works with current pytest, but is more boilerplate than the code that is possible with the PR we are discussing: @pytest.fixture(scope='session', autouse=True) def my_setup(log_setup, db_setup): pass Mind you, I'm by now perfectly fine with the rule.
I'm only discussing > it to understand the semantics of the world without that rule as just a > curious exercise. > Sure thing! It is interesting as well, and somewhat surprising that this discussion never came up before. :) Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Tue Mar 20 10:07:20 2018 From: flub at devork.be (Floris Bruynooghe) Date: Tue, 20 Mar 2018 15:07:20 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: References: Message-ID: On Tue, Mar 20 2018, Ronny Pfannschmidt wrote: > 2018-03-20 9:18 GMT+01:00 Floris Bruynooghe : >> On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: >> > I also did deprecate markinfo attributes, >> > so everyone using them will get deprecation warnings. >> >> That's a lot of people probably. How long are we giving users for this? >> ? >> > > ?the support for accessing them can be kept for quite a while, > everyone using them will be told to use the new api > i would like to remove it at the beginning of 2019 Ok, they're currently marked as "deprecated in 4.0". So we're not releasing 4.0 until then? Or are we just keeping our options open here and extending a deprecation is easier then reducing it? >> > as far as i'm concerned, markinfo attributes where never ever a correct >> way >> > to handle distinct marks and quite a few bugs in pytest came from using >> > combined marks in markinfo instead of distinct marks. >> >> A short summary of these issues would help loads :) >> > ?basically MarkInfo objects are subject to marker transfer bugs, marker > smearing and everything wrong with marks these days > they are one of holgers infamous minimal changes that basically ensured we > never ever had correctly working marks > from the time we had support for more than one marker of the same name, and > it got worse by the addition of marker transfers > (for context - pytest marks started as thing to update test function > __dict__ with its keyword arguments, > they never got de-tangled from that messy legacy, and that was a major > source of really bad bugs) Thanks! Could you give an example of a marker transfer bug? I've never run into those myself so I'm not sure I understand what this is. I assume something like this will trigger weird cases: @pytest.mark.mark0 class Foo: @pytest.mark.mark1 def test_foo(self): pass @pytest.mark.mark0('foo', a='a') def test_bar(self): pass So the new API to inspect this is (in some pseudo-code like fashion): .get_marker('mark0') -> Mark(name='mark0') .get_marker('mark1') -> Mark(name='mark1') .get_marker('mark0') -> ?? list( [Mark(name='mark0')] list( ?? Also, can I iterate over all the markers on a node? Thanks for educating me! Floris From rpfannsc at redhat.com Tue Mar 20 10:32:36 2018 From: rpfannsc at redhat.com (Ronny Pfannschmidt) Date: Tue, 20 Mar 2018 15:32:36 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: References: Message-ID: 2018-03-20 15:07 GMT+01:00 Floris Bruynooghe : > On Tue, Mar 20 2018, Ronny Pfannschmidt wrote: > > > 2018-03-20 9:18 GMT+01:00 Floris Bruynooghe : > >> On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: > >> > I also did deprecate markinfo attributes, > >> > so everyone using them will get deprecation warnings. > >> > >> That's a lot of people probably. How long are we giving users for this? > >> ? 
> >> > > > > ?the support for accessing them can be kept for quite a while, > > everyone using them will be told to use the new api > > i would like to remove it at the beginning of 2019 > > Ok, they're currently marked as "deprecated in 4.0". So we're not > releasing 4.0 until then? Or are we just keeping our options open here > and extending a deprecation is easier then reducing it? > ?exending is easier than reducing. - but im not opposed to killing it earlier? > > >> > as far as i'm concerned, markinfo attributes where never ever a > correct > >> way > >> > to handle distinct marks and quite a few bugs in pytest came from > using > >> > combined marks in markinfo instead of distinct marks. > >> > >> A short summary of these issues would help loads :) > >> > > ?basically MarkInfo objects are subject to marker transfer bugs, marker > > smearing and everything wrong with marks these days > > they are one of holgers infamous minimal changes that basically ensured > we > > never ever had correctly working marks > > from the time we had support for more than one marker of the same name, > and > > it got worse by the addition of marker transfers > > (for context - pytest marks started as thing to update test function > > __dict__ with its keyword arguments, > > they never got de-tangled from that messy legacy, and that was a major > > source of really bad bugs) > > Thanks! Could you give an example of a marker transfer bug? I've never > run into those myself so I'm not sure I understand what this is. I > assume something like this will trigger weird cases: > > @pytest.mark.mark0 > class Foo: > > @pytest.mark.mark1 > def test_foo(self): > pass > > @pytest.mark.mark0('foo', a='a') > def test_bar(self): > pass > > So the new API to inspect this is (in some pseudo-code like fashion): > > .get_marker('mark0') -> Mark(name='mark0') > .get_marker('mark1') -> Mark(name='mark1') > .get_marker('mark0') -> ?? > list( [Mark(name='mark0')] > list( ?? > > ?class TestA(object): def test_fun(self): pass @pytest.mark.evil class TestB(TestA): pass -> after collection TestA.test_fun will have a evil marker? > Also, can I iterate over all the markers on a node? > > > ?for now, explicitly not, i'm open to introducing it when someone demonstrates a real practical use-case? -- ? Ronny? > Thanks for educating me! > Floris > -- Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.schulze at gmx.net Tue Mar 20 11:03:34 2018 From: florian.schulze at gmx.net (Florian Schulze) Date: Tue, 20 Mar 2018 16:03:34 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: References: Message-ID: On 20 Mar 2018, at 15:07, Floris Bruynooghe wrote: > On Tue, Mar 20 2018, Ronny Pfannschmidt wrote: > >> 2018-03-20 9:18 GMT+01:00 Floris Bruynooghe : >>> On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: >>>> I also did deprecate markinfo attributes, >>>> so everyone using them will get deprecation warnings. >>> >>> That's a lot of people probably. How long are we giving users for >>> this? >>> ? >>> >> >> ?the support for accessing them can be kept for quite a while, >> everyone using them will be told to use the new api >> i would like to remove it at the beginning of 2019 > > Ok, they're currently marked as "deprecated in 4.0". 
So we're not > releasing 4.0 until then? Or are we just keeping our options open > here > and extending a deprecation is easier then reducing it? I think the marker PR already qualifies for a 4.0, so I would extend the deprecations. > Thanks! Could you give an example of a marker transfer bug? I've > never > run into those myself so I'm not sure I understand what this is. In devpi-server we have test classes that inherit from another test class. The derived one overwrites one fixture (for example testing through nginx or a devpi replica instead of directly against devpi-server). Because some of these fixtures are expensive, I want to mark the inherited class as "slow". Currently that mark is transferred to the base class and there is no other way to mark only the test functions on the derived class as slow. So currently the base class is also marked as slow. The PR fixes that. Regards, Florian Schulze From rpfannsc at redhat.com Tue Mar 20 11:11:46 2018 From: rpfannsc at redhat.com (Ronny Pfannschmidt) Date: Tue, 20 Mar 2018 16:11:46 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: References: Message-ID: just a quick note - the markers pr as far as i understood it does not qualify for a 4.0 since the basci apis are bckward compatible and work as expected, if we would make it 4.0 worthy, then by dropping the old cruft -- Ronny 2018-03-20 16:03 GMT+01:00 Florian Schulze : > On 20 Mar 2018, at 15:07, Floris Bruynooghe wrote: > > On Tue, Mar 20 2018, Ronny Pfannschmidt wrote: >> >> 2018-03-20 9:18 GMT+01:00 Floris Bruynooghe : >>> >>>> On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: >>>> >>>>> I also did deprecate markinfo attributes, >>>>> so everyone using them will get deprecation warnings. >>>>> >>>> >>>> That's a lot of people probably. How long are we giving users for this? >>>> ? >>>> >>>> >>> ?the support for accessing them can be kept for quite a while, >>> everyone using them will be told to use the new api >>> i would like to remove it at the beginning of 2019 >>> >> >> Ok, they're currently marked as "deprecated in 4.0". So we're not >> releasing 4.0 until then? Or are we just keeping our options open here >> and extending a deprecation is easier then reducing it? >> > > I think the marker PR already qualifies for a 4.0, so I would extend the > deprecations. > > > Thanks! Could you give an example of a marker transfer bug? I've never >> run into those myself so I'm not sure I understand what this is. >> > > In devpi-server we have test classes that inherit from another test class. > The derived one overwrites one fixture (for example testing through nginx > or a devpi replica instead of directly against devpi-server). Because some > of these fixtures are expensive, I want to mark the inherited class as > "slow". Currently that mark is transferred to the base class and there is > no other way to mark only the test functions on the derived class as slow. > So currently the base class is also marked as slow. The PR fixes that. > > Regards, > Florian Schulze > -- Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From florian.schulze at gmx.net Tue Mar 20 14:33:07 2018 From: florian.schulze at gmx.net (Florian Schulze) Date: Tue, 20 Mar 2018 19:33:07 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: References: Message-ID: <29E90FF2-9A00-4F4A-A04F-A8E62C53A4A6@gmx.net> On 20 Mar 2018, at 16:11, Ronny Pfannschmidt wrote: > just a quick note - the markers pr as far as i understood it does not > qualify for a 4.0 since the basci apis are bckward compatible and work > as > expected, if we would make it 4.0 worthy, then by dropping the old > cruft In my opinion the change in transfer behaviour makes it 4.0, because it *will* affect people. It changed things in devpi-server which necessitates (small) changes. For others it might have way more far reaching effects. Regards, Florian Schulze > 2018-03-20 16:03 GMT+01:00 Florian Schulze : > >> On 20 Mar 2018, at 15:07, Floris Bruynooghe wrote: >> >> On Tue, Mar 20 2018, Ronny Pfannschmidt wrote: >>> >>> 2018-03-20 9:18 GMT+01:00 Floris Bruynooghe : >>>> >>>>> On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: >>>>> >>>>>> I also did deprecate markinfo attributes, >>>>>> so everyone using them will get deprecation warnings. >>>>>> >>>>> >>>>> That's a lot of people probably. How long are we giving users for >>>>> this? >>>>> ? >>>>> >>>>> >>>> ?the support for accessing them can be kept for quite a while, >>>> everyone using them will be told to use the new api >>>> i would like to remove it at the beginning of 2019 >>>> >>> >>> Ok, they're currently marked as "deprecated in 4.0". So we're not >>> releasing 4.0 until then? Or are we just keeping our options open >>> here >>> and extending a deprecation is easier then reducing it? >>> >> >> I think the marker PR already qualifies for a 4.0, so I would extend >> the >> deprecations. >> >> >> Thanks! Could you give an example of a marker transfer bug? I've >> never >>> run into those myself so I'm not sure I understand what this is. >>> >> >> In devpi-server we have test classes that inherit from another test >> class. >> The derived one overwrites one fixture (for example testing through >> nginx >> or a devpi replica instead of directly against devpi-server). Because >> some >> of these fixtures are expensive, I want to mark the inherited class >> as >> "slow". Currently that mark is transferred to the base class and >> there is >> no other way to mark only the test functions on the derived class as >> slow. >> So currently the base class is also marked as slow. The PR fixes >> that. >> >> Regards, >> Florian Schulze >> > > > > -- > > Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Michael Cunningham, Michael > O'Neill, Eric Shander From rpfannsc at redhat.com Tue Mar 20 15:28:37 2018 From: rpfannsc at redhat.com (Ronny Pfannschmidt) Date: Tue, 20 Mar 2018 20:28:37 +0100 Subject: [pytest-dev] mark fix-up major milestone, please review In-Reply-To: <29E90FF2-9A00-4F4A-A04F-A8E62C53A4A6@gmx.net> References: <29E90FF2-9A00-4F4A-A04F-A8E62C53A4A6@gmx.net> Message-ID: again - please show me those changes! 
-- Ronny 2018-03-20 19:33 GMT+01:00 Florian Schulze : > On 20 Mar 2018, at 16:11, Ronny Pfannschmidt wrote: > > just a quick note - the markers pr as far as i understood it does not >> qualify for a 4.0 since the basci apis are bckward compatible and work as >> expected, if we would make it 4.0 worthy, then by dropping the old cruft >> > > In my opinion the change in transfer behaviour makes it 4.0, because it > *will* affect people. It changed things in devpi-server which necessitates > (small) changes. For others it might have way more far reaching effects. > > Regards, > Florian Schulze > > > 2018-03-20 16:03 GMT+01:00 Florian Schulze : >> >> On 20 Mar 2018, at 15:07, Floris Bruynooghe wrote: >>> >>> On Tue, Mar 20 2018, Ronny Pfannschmidt wrote: >>> >>>> >>>> 2018-03-20 9:18 GMT+01:00 Floris Bruynooghe : >>>> >>>>> >>>>> On Mon, Mar 19 2018, Ronny Pfannschmidt wrote: >>>>>> >>>>>> I also did deprecate markinfo attributes, >>>>>>> so everyone using them will get deprecation warnings. >>>>>>> >>>>>>> >>>>>> That's a lot of people probably. How long are we giving users for >>>>>> this? >>>>>> ? >>>>>> >>>>>> >>>>>> ?the support for accessing them can be kept for quite a while, >>>>> everyone using them will be told to use the new api >>>>> i would like to remove it at the beginning of 2019 >>>>> >>>>> >>>> Ok, they're currently marked as "deprecated in 4.0". So we're not >>>> releasing 4.0 until then? Or are we just keeping our options open here >>>> and extending a deprecation is easier then reducing it? >>>> >>>> >>> I think the marker PR already qualifies for a 4.0, so I would extend the >>> deprecations. >>> >>> >>> Thanks! Could you give an example of a marker transfer bug? I've never >>> >>>> run into those myself so I'm not sure I understand what this is. >>>> >>>> >>> In devpi-server we have test classes that inherit from another test >>> class. >>> The derived one overwrites one fixture (for example testing through nginx >>> or a devpi replica instead of directly against devpi-server). Because >>> some >>> of these fixtures are expensive, I want to mark the inherited class as >>> "slow". Currently that mark is transferred to the base class and there is >>> no other way to mark only the test functions on the derived class as >>> slow. >>> So currently the base class is also marked as slow. The PR fixes that. >>> >>> Regards, >>> Florian Schulze >>> >>> >> >> >> -- >> >> Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, >> Commercial register: Amtsgericht Muenchen, HRB 153243, >> Managing Directors: Charles Cachera, Michael Cunningham, Michael >> O'Neill, Eric Shander >> > > > -- Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Tue Mar 20 19:27:57 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Tue, 20 Mar 2018 23:27:57 +0000 Subject: [pytest-dev] Fixture ordering and scopes In-Reply-To: References: Message-ID: Howdy! On Thu, Mar 15, 2018 at 6:42 PM Bruno Oliveira wrote: > I opened up a PR which > does sort parameters by scope while keeping the relative order of fixtures > of same scope intact, and the test suite passes without failures so if the > current behavior is by design there are not tests enforcing it. 
> I pushed docs and tidied the code a little bit, so I believe the branch is ready for another round of reviews/merging. Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Wed Mar 21 18:31:05 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Wed, 21 Mar 2018 22:31:05 +0000 Subject: [pytest-dev] License for pytest-dev/pytest-design In-Reply-To: References: Message-ID: I'm happy with whatever you guys decide FWIW. Cheers, Bruno. On Sat, Mar 17, 2018 at 7:03 AM Vasily Kuznetsov wrote: > I'm ok with MIT although I would think that some kind of CC would be more > appropriate in this case. I'm also not very sure about this though... > > As for contributors, Brianna also contributed quite a lot during the > sprint and "asserts before reverts" motto is her idea if I remember it > right. > > Cheers, > Vasily > > On Sat, Mar 17, 2018 at 12:02 AM Floris Bruynooghe wrote: > >> Bruno Oliveira writes: >> >> > Hi folks, >> > >> > pytest-design is the repository used to store logo and t-shirt designs >> from >> > our 2016 sprint, but it currently it isn't under any LICENSE. >> >> Whoops, that's pretty terrible! >> >> > I've been >> > asked if it was OK to use the logos to make a t-shirt (and of course I >> know >> > it is), but we should add a license to the repository allowing just >> that. >> >> Does it make sense to put it under MIT as well? I'm not sure I'd want >> the attribution part of Creative Commons license here as that's somewhat >> hard with stickers or t-shirts. >> >> >> It seems currently it's Vasily and me who committed to the repo, though >> Florian also contributed at the sprint IIRC. Which I guess means us >> three have to agree to whatever license we end up with. So I'll propose >> MIT as I don't know any better :-) >> >> Cheers, >> Floris >> > _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 03sjbrown at gmail.com Wed Mar 21 22:39:09 2018 From: 03sjbrown at gmail.com (Shawn Brown) Date: Wed, 21 Mar 2018 22:39:09 -0400 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> References: <20180319141306.GQ4712@beto> <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> Message-ID: Ah. It's good to see that this has been thought about before. My motivation for asking this question was to perform my due diligence and make sure I wasn't missing something before moving ahead. My immediate need is handled by using assert_myfunc() to raise its own error internally--same as Floris' example. Though, it's not ideal. I know my examples have been vague as I've stripped the specifics of my project to focus the question specifically on pytest's behavior and I greatly appreciate everyone who is giving some thought to this. As Ronny mentioned, I'm sure it's possible to address this without user-facing AST manipulation. But I'm not familiar enough with the code base to see where I can best hack on the representations. However, I do have a working AST-based demonstration (below). This uses a fragile monkey-patch that is just asking for trouble so please take this for the experimental hack it is... 
FILE "conftest.py": import ast import _pytest def my_ast_prerewrite_hook(ast_assert): """Modifies AST of certain asserts before pytest-rewriting.""" # Demo AST-tree manipulation (actual implemenation # would need to be more careful than this). if (isinstance(ast_assert.test, ast.Call) and isinstance(ast_assert.test.func, ast.Name) and ast_assert.test.func.id == 'myfunc'): ast_assert.test.func = ast.Name('assert_myfunc', ast.Load()) return ast_assert # UNDESIRABLE MONKEY PATCHING!!! class ModifiedRewriter(_pytest.assertion.rewrite.AssertionRewriter): def visit_Assert(self, assert_): assert_ = my_ast_prerewrite_hook(assert_) # <- PRE-REWRITE HOOK return super(ModifiedRewriter, self).visit_Assert(assert_) def rewrite_asserts(mod, module_path=None, config=None): ModifiedRewriter(module_path, config).run(mod) _pytest.assertion.rewrite.rewrite_asserts = rewrite_asserts FILE "test_ast_hook_approach.py": import pytest # Test helpers. def myfunc(x): return x == 42 def assert_myfunc(x): __tracebackhide__ = True if not myfunc(x): msg = 'custom report\nmulti-line output\nmyfunc({0}) failed' raise AssertionError(msg.format(x)) return True # Test cases. def test_1passing(): assert myfunc(42) def test_2passing(): assert myfunc(41) is False def test_3passing(): with pytest.raises(AssertionError) as excinfo: assert myfunc(41) assert 'custom report' in str(excinfo.value) def test_4failing(): assert myfunc(41) Running the above test gives 3 passing cases and 1 failing case (which uses the custom report). Also, test_2passing() checks for "is False" instead of just "== False" which I think would be wonderful to support as it removes all caveats for the user (so users get a real False when they expect False, instead of a Falsey alternative). Also, if I were going to use AST manipulation like this, I would probably reference assert_myfunc() by attaching it as a private attribute to myfunc() itself -- and then reference it with ast.Attribute() node instead of an ast.Name(). But again, solving this without AST manipulation could be better in many ways. --Shawn On Mon, Mar 19, 2018 at 1:59 PM, Ronny Pfannschmidt < ich at ronnypfannschmidt.de> wrote: > hi everyone, > > this is just about single value assertion helpers > > i logged an feature request about that a few year back > see https://github.com/pytest-dev/pytest/issues/95 - > > so basically this use-case was known since 2011 ^^ and doesn't require > ast rewriting lice macros, > just proper engineering of the representation and handling of single > values in the assertion rewriter. > > -- Ronny > > > Am 19.03.2018 um 15:13 schrieb holger krekel: > > On Mon, Mar 19, 2018 at 15:03 +0100, Floris Bruynooghe wrote: > >> On Sun, Mar 18 2018, Shawn Brown wrote: > >>> Unfortunately, this does not solve my usecase. I'm trying to handle > cases > >>> where the following statement would pass: > >>> > >>> assert myfunc(failing_input) == False > >>> > >>> But where this next statement would fail using my custom report: > >>> > >>> assert myfunc(failing_input) > >>> > >>> Calling myfunc() needs to return True or False (or at least Truthy or > >>> Falsy)--this is locked-in behavior. > >> I'm not sure if this is compatible with Python's semantics really. If I > >> understand correctly you're asking for a full-on macro implementation on > >> Python or something. Which in theory you could do with an AST > >> NodeVisitor, but really Python isn't made for this -- sounds like you'd > >> enjoy lisp! 
;-) > >> > >> The best thing I can suggest is to make use of the:: > >> > >> assert myfunc(failing_input), repr(myfunc(failing_input())) > > i wonder if one could try to rewrite the ast for "assert myfunc(x)" to > > "assert __pytest_funcrepr_helper(myfunc(x), 'myfunc(x)')" with > something like: > > > > class __pytest_funcrepr_helper: > > def __init__(self, val, source): > > self.val = val > > self.source = source > > def __bool__(self): > > return bool(self.val) > > def __repr__(self): > > return "{!r} returned non-true {!r}".format(self.source, > self.val) > > > > but maybe i am not grasping all details involved. It's been a while since > > i looked into ast-rewriting ... > > > > holger > > > > > >> functionality to also get a custom error message. Here your myfunc() > >> whould have to return some object which both implements __bool__ as well > >> as __repr__ I guess. > >> > >> Maybe there's a feature request in here for something like this:: > >> > >> class Foo: > >> def __bool__(self): > >> return False > >> > >> def __repr__(self): > >> return 'multiline\nstring' > >> > >> assert Foo() > >> > >> To actually show the repr in the error message, which it currently > >> doesn't. I'd like to know what other people think of such a feature > >> though, and haven't thought through all the implications yet. But I'm > >> curious, would something like that solve your case? > >> > >> Cheers, > >> Floris > >> _______________________________________________ > >> pytest-dev mailing list > >> pytest-dev at python.org > >> https://mail.python.org/mailman/listinfo/pytest-dev > > _______________________________________________ > > pytest-dev mailing list > > pytest-dev at python.org > > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpfannsc at redhat.com Thu Mar 22 04:12:56 2018 From: rpfannsc at redhat.com (Ronny Pfannschmidt) Date: Thu, 22 Mar 2018 09:12:56 +0100 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: <20180319141306.GQ4712@beto> <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> Message-ID: that approach is broken in the sense, that it breaks behaviour expectations, an return value helper, that triggers an assertion on its own is simply no longer a return value helper, but a assertion helper supporting it like that would result in a really bad api instead having assertion helper that returns a "truthy" object which can be introspected by pytest and/or negated should be more suitable 2018-03-22 3:39 GMT+01:00 Shawn Brown <03sjbrown at gmail.com>: > Ah. It's good to see that this has been thought about before. > > My motivation for asking this question was to perform my due diligence and > make sure I wasn't missing something before moving ahead. My immediate > need is handled by using assert_myfunc() to raise its own error > internally--same as Floris' example. Though, it's not ideal. > > I know my examples have been vague as I've stripped the specifics of my > project to focus the question specifically on pytest's behavior and I > greatly appreciate everyone who is giving some thought to this. > > As Ronny mentioned, I'm sure it's possible to address this without > user-facing AST manipulation. But I'm not familiar enough with the code > base to see where I can best hack on the representations. However, I do > have a working AST-based demonstration (below). 
This uses a fragile > monkey-patch that is just asking for trouble so please take this for the > experimental hack it is... > > > FILE "conftest.py": > > import ast > import _pytest > > def my_ast_prerewrite_hook(ast_assert): > """Modifies AST of certain asserts before pytest-rewriting.""" > # Demo AST-tree manipulation (actual implemenation > # would need to be more careful than this). > if (isinstance(ast_assert.test, ast.Call) > and isinstance(ast_assert.test.func, ast.Name) > and ast_assert.test.func.id == 'myfunc'): > > ast_assert.test.func = ast.Name('assert_myfunc', ast.Load()) > > return ast_assert > > # UNDESIRABLE MONKEY PATCHING!!! > class ModifiedRewriter(_pytest.assertion.rewrite.AssertionRewriter): > def visit_Assert(self, assert_): > assert_ = my_ast_prerewrite_hook(assert_) # <- PRE-REWRITE > HOOK > return super(ModifiedRewriter, self).visit_Assert(assert_) > > def rewrite_asserts(mod, module_path=None, config=None): > ModifiedRewriter(module_path, config).run(mod) > > _pytest.assertion.rewrite.rewrite_asserts = rewrite_asserts > > > FILE "test_ast_hook_approach.py": > > import pytest > > # Test helpers. > def myfunc(x): > return x == 42 > > def assert_myfunc(x): > __tracebackhide__ = True > if not myfunc(x): > msg = 'custom report\nmulti-line output\nmyfunc({0}) failed' > raise AssertionError(msg.format(x)) > return True > > # Test cases. > def test_1passing(): > assert myfunc(42) > > def test_2passing(): > assert myfunc(41) is False > > def test_3passing(): > with pytest.raises(AssertionError) as excinfo: > assert myfunc(41) > assert 'custom report' in str(excinfo.value) > > def test_4failing(): > assert myfunc(41) > > > Running the above test gives 3 passing cases and 1 failing case (which > uses the custom report). Also, test_2passing() checks for "is False" > instead of just "== False" which I think would be wonderful to support as > it removes all caveats for the user (so users get a real False when they > expect False, instead of a Falsey alternative). Also, if I were going to > use AST manipulation like this, I would probably reference assert_myfunc() > by attaching it as a private attribute to myfunc() itself -- and then > reference it with ast.Attribute() node instead of an ast.Name(). But again, > solving this without AST manipulation could be better in many ways. > > --Shawn > > > On Mon, Mar 19, 2018 at 1:59 PM, Ronny Pfannschmidt < > ich at ronnypfannschmidt.de> wrote: > >> hi everyone, >> >> this is just about single value assertion helpers >> >> i logged an feature request about that a few year back >> see https://github.com/pytest-dev/pytest/issues/95 - >> >> so basically this use-case was known since 2011 ^^ and doesn't require >> ast rewriting lice macros, >> just proper engineering of the representation and handling of single >> values in the assertion rewriter. >> >> -- Ronny >> >> >> Am 19.03.2018 um 15:13 schrieb holger krekel: >> > On Mon, Mar 19, 2018 at 15:03 +0100, Floris Bruynooghe wrote: >> >> On Sun, Mar 18 2018, Shawn Brown wrote: >> >>> Unfortunately, this does not solve my usecase. I'm trying to handle >> cases >> >>> where the following statement would pass: >> >>> >> >>> assert myfunc(failing_input) == False >> >>> >> >>> But where this next statement would fail using my custom report: >> >>> >> >>> assert myfunc(failing_input) >> >>> >> >>> Calling myfunc() needs to return True or False (or at least Truthy or >> >>> Falsy)--this is locked-in behavior. >> >> I'm not sure if this is compatible with Python's semantics really. 
If >> I >> >> understand correctly you're asking for a full-on macro implementation >> on >> >> Python or something. Which in theory you could do with an AST >> >> NodeVisitor, but really Python isn't made for this -- sounds like you'd >> >> enjoy lisp! ;-) >> >> >> >> The best thing I can suggest is to make use of the:: >> >> >> >> assert myfunc(failing_input), repr(myfunc(failing_input())) >> > i wonder if one could try to rewrite the ast for "assert myfunc(x)" to >> > "assert __pytest_funcrepr_helper(myfunc(x), 'myfunc(x)')" with >> something like: >> > >> > class __pytest_funcrepr_helper: >> > def __init__(self, val, source): >> > self.val = val >> > self.source = source >> > def __bool__(self): >> > return bool(self.val) >> > def __repr__(self): >> > return "{!r} returned non-true {!r}".format(self.source, >> self.val) >> > >> > but maybe i am not grasping all details involved. It's been a while >> since >> > i looked into ast-rewriting ... >> > >> > holger >> > >> > >> >> functionality to also get a custom error message. Here your myfunc() >> >> whould have to return some object which both implements __bool__ as >> well >> >> as __repr__ I guess. >> >> >> >> Maybe there's a feature request in here for something like this:: >> >> >> >> class Foo: >> >> def __bool__(self): >> >> return False >> >> >> >> def __repr__(self): >> >> return 'multiline\nstring' >> >> >> >> assert Foo() >> >> >> >> To actually show the repr in the error message, which it currently >> >> doesn't. I'd like to know what other people think of such a feature >> >> though, and haven't thought through all the implications yet. But I'm >> >> curious, would something like that solve your case? >> >> >> >> Cheers, >> >> Floris >> >> _______________________________________________ >> >> pytest-dev mailing list >> >> pytest-dev at python.org >> >> https://mail.python.org/mailman/listinfo/pytest-dev >> > _______________________________________________ >> > pytest-dev mailing list >> > pytest-dev at python.org >> > https://mail.python.org/mailman/listinfo/pytest-dev >> > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > > -- Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- An HTML attachment was scrubbed... URL: From 03sjbrown at gmail.com Thu Mar 22 11:35:05 2018 From: 03sjbrown at gmail.com (Shawn Brown) Date: Thu, 22 Mar 2018 11:35:05 -0400 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: <20180319141306.GQ4712@beto> <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> Message-ID: Would a return value helper--as you are thinking about it--be able to handle cases like test_3passing()? def test_3passing(): with pytest.raises(AssertionError) as excinfo: assert myfunc(41) assert 'custom report' in str(excinfo.value) I've looked around the code base but I'm not sure where best to hack on things. I've played with changes to _pytest.assertion.rewrite._saferepr() but this doesn't seem to be the right place to address this sort of change (newline handling and other details are handled elsewhere). 
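To make that question concrete, here is a rough sketch in plain Python of
what I understand by a "return value helper" -- this is not pytest API, and
the names CheckResult and check_myfunc are made up purely for illustration:

    import pytest

    class CheckResult(object):
        """Falsy/truthy result object with a rich repr; never raises."""
        def __init__(self, ok, message):
            self.ok = ok
            self.message = message

        def __bool__(self):          # Python 3 truth value
            return self.ok

        __nonzero__ = __bool__       # Python 2 truth value

        def __repr__(self):
            return self.message

    def check_myfunc(x):
        if x == 42:
            return CheckResult(True, 'myfunc({0}) passed'.format(x))
        msg = 'custom report\nmulti-line output\nmyfunc({0}) failed'
        return CheckResult(False, msg.format(x))

    def test_sketch():
        # The bare assert itself raises AssertionError when the result is
        # falsy, so pytest.raises(AssertionError) still catches it; the
        # helper never raises on its own.
        with pytest.raises(AssertionError) as excinfo:
            assert check_myfunc(41)

Whether 'custom report' then shows up in str(excinfo.value) depends on how
the rewritten assertion renders the object's repr, which is exactly the part
I'm not sure how to hook into.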
On Thu, Mar 22, 2018 at 4:12 AM, Ronny Pfannschmidt wrote: > that approach is broken in the sense, that it breaks behaviour > expectations, > an return value helper, that triggers an assertion on its own is simply no > longer a return value helper, but a assertion helper > > supporting it like that would result in a really bad api > > instead having assertion helper that returns a "truthy" object which can > be introspected by pytest and/or negated should be more suitable > > 2018-03-22 3:39 GMT+01:00 Shawn Brown <03sjbrown at gmail.com>: > >> Ah. It's good to see that this has been thought about before. >> >> My motivation for asking this question was to perform my due diligence >> and make sure I wasn't missing something before moving ahead. My >> immediate need is handled by using assert_myfunc() to raise its own error >> internally--same as Floris' example. Though, it's not ideal. >> >> I know my examples have been vague as I've stripped the specifics of my >> project to focus the question specifically on pytest's behavior and I >> greatly appreciate everyone who is giving some thought to this. >> >> As Ronny mentioned, I'm sure it's possible to address this without >> user-facing AST manipulation. But I'm not familiar enough with the code >> base to see where I can best hack on the representations. However, I do >> have a working AST-based demonstration (below). This uses a fragile >> monkey-patch that is just asking for trouble so please take this for the >> experimental hack it is... >> >> >> FILE "conftest.py": >> >> import ast >> import _pytest >> >> def my_ast_prerewrite_hook(ast_assert): >> """Modifies AST of certain asserts before pytest-rewriting.""" >> # Demo AST-tree manipulation (actual implemenation >> # would need to be more careful than this). >> if (isinstance(ast_assert.test, ast.Call) >> and isinstance(ast_assert.test.func, ast.Name) >> and ast_assert.test.func.id == 'myfunc'): >> >> ast_assert.test.func = ast.Name('assert_myfunc', ast.Load()) >> >> return ast_assert >> >> # UNDESIRABLE MONKEY PATCHING!!! >> class ModifiedRewriter(_pytest.assertion.rewrite.AssertionRewriter): >> def visit_Assert(self, assert_): >> assert_ = my_ast_prerewrite_hook(assert_) # <- PRE-REWRITE >> HOOK >> return super(ModifiedRewriter, self).visit_Assert(assert_) >> >> def rewrite_asserts(mod, module_path=None, config=None): >> ModifiedRewriter(module_path, config).run(mod) >> >> _pytest.assertion.rewrite.rewrite_asserts = rewrite_asserts >> >> >> FILE "test_ast_hook_approach.py": >> >> import pytest >> >> # Test helpers. >> def myfunc(x): >> return x == 42 >> >> def assert_myfunc(x): >> __tracebackhide__ = True >> if not myfunc(x): >> msg = 'custom report\nmulti-line output\nmyfunc({0}) failed' >> raise AssertionError(msg.format(x)) >> return True >> >> # Test cases. >> def test_1passing(): >> assert myfunc(42) >> >> def test_2passing(): >> assert myfunc(41) is False >> >> def test_3passing(): >> with pytest.raises(AssertionError) as excinfo: >> assert myfunc(41) >> assert 'custom report' in str(excinfo.value) >> >> def test_4failing(): >> assert myfunc(41) >> >> >> Running the above test gives 3 passing cases and 1 failing case (which >> uses the custom report). Also, test_2passing() checks for "is False" >> instead of just "== False" which I think would be wonderful to support as >> it removes all caveats for the user (so users get a real False when they >> expect False, instead of a Falsey alternative). 
Also, if I were going to >> use AST manipulation like this, I would probably reference assert_myfunc() >> by attaching it as a private attribute to myfunc() itself -- and then >> reference it with ast.Attribute() node instead of an ast.Name(). But again, >> solving this without AST manipulation could be better in many ways. >> >> --Shawn >> >> >> On Mon, Mar 19, 2018 at 1:59 PM, Ronny Pfannschmidt < >> ich at ronnypfannschmidt.de> wrote: >> >>> hi everyone, >>> >>> this is just about single value assertion helpers >>> >>> i logged an feature request about that a few year back >>> see https://github.com/pytest-dev/pytest/issues/95 - >>> >>> so basically this use-case was known since 2011 ^^ and doesn't require >>> ast rewriting lice macros, >>> just proper engineering of the representation and handling of single >>> values in the assertion rewriter. >>> >>> -- Ronny >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Thu Mar 22 17:46:10 2018 From: flub at devork.be (Floris Bruynooghe) Date: Thu, 22 Mar 2018 22:46:10 +0100 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: <20180319141306.GQ4712@beto> <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> Message-ID: On Thu, Mar 22 2018, Shawn Brown wrote: > Would a return value helper--as you are thinking about it--be able to > handle cases like test_3passing()? > > def test_3passing(): > with pytest.raises(AssertionError) as excinfo: > assert myfunc(41) > assert 'custom report' in str(excinfo.value) I agree with Ronny here and do think you're trying to design a very weird and unnatural API for Python. The function should either return an object or raise an exception, you can't have both. Well, you can as you've show, but this is very brittle and your users will be thoroughly confused about what is going on. So you shouldn't have both. Having followed this discussion so far I still think the most sane approach is to let pytest show the repr of failing objects [0]. This would allow you to write objects with a .__bool__() and a .__repr__() where pytest would show the repr if it's false. And if you really want to have the dual behaviour, make it explicit to the user with a signature like myfunc(value, raising=False) so they can invoke the raising behaviour when desired. [0] I don't think it currently does this, so this would be a feature request AFAIK. From opensource at ronnypfannschmidt.de Thu Mar 22 18:45:35 2018 From: opensource at ronnypfannschmidt.de (RonnyPfannschmidt) Date: Thu, 22 Mar 2018 23:45:35 +0100 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: <20180319141306.GQ4712@beto> <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> Message-ID: <50d6e237-1a16-feb2-b6da-a873f6316667@ronnypfannschmidt.de> i don't see under what criteria that test is sensible in terms of integration, if the helper does the assert, you do not need a assert statement if the helper does not do the assert, the python assertion mechanism on its own is unable to provide the meta-data as such that test is really only able to test a non-integrated solution and i don't consider it to be of value when talking about integration -- Ronny Am 22.03.2018 um 16:35 schrieb Shawn Brown: > Would a return value helper--as you are thinking about it--be able to > handle cases like test_3passing()? > > ??? def test_3passing(): > ??????? with pytest.raises(AssertionError) as excinfo: > ??????????? 
assert myfunc(41) > ??????? assert 'custom report' in str(excinfo.value) > > I've looked around the code base but I'm not sure where best to hack > on things. I've played with changes to > _pytest.assertion.rewrite._saferepr() but this doesn't seem to be the > right place to address this sort of change (newline handling and other > details are handled elsewhere). > > > On Thu, Mar 22, 2018 at 4:12 AM, Ronny Pfannschmidt > > wrote: > > that approach is broken in the sense, that it breaks behaviour > expectations, > an return value helper, that triggers an assertion on its own is > simply no longer a return value helper, but a assertion helper > > supporting it like that would result in a really bad api > > instead having? assertion helper that returns a "truthy" object > which can be introspected by pytest and/or negated should be more > suitable > > 2018-03-22 3:39 GMT+01:00 Shawn Brown <03sjbrown at gmail.com > >: > > Ah. It's good to see that this has been thought about before.? > > My motivation for asking this question was to perform my due > diligence and make sure I wasn't missing something before > moving ahead.?My immediate need is handled by using > assert_myfunc() to raise its own error internally--same as > Floris' example. Though, it's not ideal. > > I know my examples have been vague as I've stripped the > specifics of my project to focus the question specifically on > pytest's behavior and I greatly appreciate everyone who is > giving some thought to this. > > As Ronny mentioned, I'm sure it's possible to address this > without user-facing AST manipulation. But I'm not familiar > enough with the code base to see where I can best hack on the > representations. However, I do have a working AST-based > demonstration (below). This uses a fragile monkey-patch that > is just asking for trouble so please take this for the > experimental hack it is... > > > FILE "conftest.py": > > ??? import ast > ??? import _pytest > > ??? def my_ast_prerewrite_hook(ast_assert): > ??????? """Modifies AST of certain asserts before > pytest-rewriting.""" > ??????? # Demo AST-tree manipulation (actual implemenation > ??????? # would need to be more careful than this). > ??????? if (isinstance(ast_assert.test, ast.Call) > ??????????????? and isinstance(ast_assert.test.func, ast.Name) > ??????????????? and ast_assert.test.func.id > == 'myfunc'): > > ??????????? ast_assert.test.func = ast.Name('assert_myfunc', > ast.Load()) > > ??????? return ast_assert > > ??? # UNDESIRABLE MONKEY PATCHING!!! > ??? class > ModifiedRewriter(_pytest.assertion.rewrite.AssertionRewriter): > ??????? def visit_Assert(self, assert_): > ??????????? assert_ = my_ast_prerewrite_hook(assert_)? # <- > PRE-REWRITE HOOK > ??????????? return super(ModifiedRewriter, > self).visit_Assert(assert_) > > ??? def rewrite_asserts(mod, module_path=None, config=None): > ??????? ModifiedRewriter(module_path, config).run(mod) > > ??? _pytest.assertion.rewrite.rewrite_asserts = rewrite_asserts > > > FILE "test_ast_hook_approach.py": > > ??? import pytest > > ??? # Test helpers. > ??? def myfunc(x): > ??????? return x == 42 > > ??? def assert_myfunc(x): > ??????? __tracebackhide__ = True > ??????? if not myfunc(x): > ??????????? msg = 'custom report\nmulti-line > output\nmyfunc({0}) failed' > ??????????? raise AssertionError(msg.format(x)) > ??????? return True > > ??? # Test cases. > ??? def test_1passing(): > ??????? assert myfunc(42) > > ??? def test_2passing(): > ??????? assert myfunc(41) is False > > ??? def test_3passing(): > ??????? 
with pytest.raises(AssertionError) as excinfo: > ??????????? assert myfunc(41) > ??????? assert 'custom report' in str(excinfo.value) > > ??? def test_4failing(): > ??????? assert myfunc(41) > > > Running the above test gives 3 passing cases and 1 failing > case (which uses the custom report). Also, test_2passing() > checks for "is False" instead of just "== False" which I think > would be wonderful to support as it removes all caveats for > the user (so users get a real False when they expect False, > instead of a Falsey alternative). Also, if I were going to use > AST manipulation like this, I would probably reference > assert_myfunc() by attaching it as a private attribute to > myfunc() itself -- and then reference it with ast.Attribute() > node instead of an ast.Name(). But again, solving this without > AST manipulation could be better in many ways. > > --Shawn > > > On Mon, Mar 19, 2018 at 1:59 PM, Ronny Pfannschmidt > > > wrote: > > hi everyone, > > this is just about single value assertion helpers > > i logged an feature request about that a few year back > see https://github.com/pytest-dev/pytest/issues/95 > - > > so basically this use-case was known since 2011 ^^ and > doesn't require > ast rewriting lice macros, > just proper engineering of the representation and handling > of single > values in the assertion rewriter. > > -- Ronny > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Mar 22 19:55:36 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 22 Mar 2018 23:55:36 +0000 Subject: [pytest-dev] pytest 3.5.0 released! Message-ID: The pytest team is proud to announce the 3.5.0 release! This release contains a large number of new command-line options, a lot of them made by new contributors which makes this all the more reason for celebration, make sure to take a look at the CHANGELOG to see what's new: http://doc.pytest.org/en/latest/changelog.html For complete documentation, please visit: http://docs.pytest.org As usual, you can upgrade from pypi via: pip install -U pytest Thanks to all who contributed to this release, among them: * Allan Feldman * Brian Maissy * Bruno Oliveira * Carlos Jenkins * Daniel Hahler * Florian Bruhin * Jason R. Coombs * Jeffrey Rackauckas * Jordan Speicher * Julien Palard * Kale Kundert * Kostis Anagnostopoulos * Kyle Altendorf * Maik Figura * Pedro Algarvio * Ronny Pfannschmidt * Tadeu Manoel * Tareq Alayan * Thomas Hisch * William Lee * codetriage-readme-bot * feuillemorte * joshm91 * mike Happy testing, The Pytest Development Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From 03sjbrown at gmail.com Fri Mar 23 02:15:36 2018 From: 03sjbrown at gmail.com (Shawn Brown) Date: Fri, 23 Mar 2018 02:15:36 -0400 Subject: [pytest-dev] Custom reporting for asserts without comparison operators? In-Reply-To: References: <20180319141306.GQ4712@beto> <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> Message-ID: > I still think the most sane approach is to let pytest show > the repr of failing objects [0]. This would allow you to > write objects with a .__bool__() and a .__repr__() where > pytest would show the repr if it's false. Don't get me wrong--I fully agree that the approach you're describing. And I'm not advocating for that AST example at all (it's just all I could cobble together because I didn't see an appropriate place to hack on the repr handling). > And if you really want to have the dual behaviour, make > it explicit... 
The dual behavior was an implementation detail of the hacked experiment I
was playing with. My interest was in having something like
pytest_assertrepr_compare() but for single values rather than just
comparisons.

> [0] I don't think it currently does this, so this would
> be a feature request AFAIK.

Here's an example of pytest's current behavior using a falsey value with a
repr:

    import pytest

    # Test helpers.
    def myfunc(x):
        if x == 42:
            return True
        msg = 'custom report\nmulti-line output\nmyfunc({0}) failed'
        return FalsyValue(msg.format(x))

    class FalsyValue(object):
        def __init__(self, repr_string):
            self.repr_string = repr_string

        def __bool__(self):
            return False

        def __nonzero__(self):  # <- for py 2
            return False

        def __eq__(self, other):
            return other == False

        def __repr__(self):
            return self.repr_string

    # Test cases.
    def test_passing():
        assert myfunc(42)

    def test_failing():
        assert myfunc(41)

Running this gives the following failure message:

    ============================ FAILURES ============================
    __________________________ test_failing __________________________

        def test_failing():
    >       assert myfunc(41)
    E       assert custom report\nmulti-line output\nmyfunc(41) failed
    E        +  where custom report\nmulti-line output\nmyfunc(41) failed = myfunc(41)

    test_falsey_object.py:31: AssertionError
    =============== 1 failed, 1 passed in 0.22 seconds ===============

So this does sort-of work, although the repr is duplicated and newlines are
being escaped.

One thing that's important to mention: if there's a feature request based on
anything in this discussion thread, it should not be made for my case. After
giving it more thought, I think I'll need to raise my own errors directly, so
I'm not sure I would be able to use the feature.


On Thu, Mar 22, 2018 at 5:46 PM, Floris Bruynooghe wrote:

> On Thu, Mar 22 2018, Shawn Brown wrote:
>
> > Would a return value helper--as you are thinking about it--be able to
> > handle cases like test_3passing()?
> >
> >     def test_3passing():
> >         with pytest.raises(AssertionError) as excinfo:
> >             assert myfunc(41)
> >         assert 'custom report' in str(excinfo.value)
>
> I agree with Ronny here and do think you're trying to design a very
> weird and unnatural API for Python. The function should either return
> an object or raise an exception, you can't have both. Well, you can, as
> you've shown, but this is very brittle and your users will be thoroughly
> confused about what is going on. So you shouldn't have both.
>
> Having followed this discussion so far I still think the most sane
> approach is to let pytest show the repr of failing objects [0]. This would
> allow you to write objects with a .__bool__() and a .__repr__() where
> pytest would show the repr if it's false. And if you really want to
> have the dual behaviour, make it explicit to the user with a signature
> like myfunc(value, raising=False) so they can invoke the raising
> behaviour when desired.
>
>
> [0] I don't think it currently does this, so this would be a feature
> request AFAIK.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From flub at devork.be Mon Mar 26 16:30:10 2018
From: flub at devork.be (Floris Bruynooghe)
Date: Mon, 26 Mar 2018 22:30:10 +0200
Subject: [pytest-dev] Custom reporting for asserts without comparison operators?
In-Reply-To: References: <20180319141306.GQ4712@beto> <6e42bc19-1539-8c22-c927-574df8a69e6c@ronnypfannschmidt.de> Message-ID: On Fri, Mar 23 2018, Shawn Brown wrote: > The dual behavior was an implementation detail of the hacked experiment I > was playing with. My interest was in having something like > pytest_assertrepr_compare() but for single-values rather than just > comparisons. Cool, I think the equivalent of the assertrepr hook is the FalsyValue you have below. But as you noticed it needs some work. >> [0] I don't think it currently does this, so this would >> be a feature request AFAIK. > > Here's an example of Pytest's current behavior using a falsey value with a > repr: > > import pytest > > # Test helpers. > def myfunc(x): > if x == 42: > return True > msg = 'custom report\nmulti-line output\nmyfunc({0}) failed' > return FalsyValue(msg.format(x)) > > class FalsyValue(object): > def __init__(self, repr_string): > self.repr_string = repr_string > > def __bool__(self): > return False > > def __nonzero__(self): # <- for py 2 > return False > > def __eq__(self, other): > return other == False > > def __repr__(self): > return self.repr_string > > # Test cases. > def test_passing(): > assert myfunc(42) > > def test_failing(): > assert myfunc(41) > > > Running this gives the following failure message: > > ============================ FAILURES ============================ > __________________________ test_failing __________________________ > > def test_failing(): > > assert myfunc(41) > E assert custom report\nmulti-line output\nmyfunc(41) failed > E + where custom report\nmulti-line output\nmyfunc(41) fai > led = myfunc(41) > > test_falsey_object.py:31: AssertionError > =============== 1 failed, 1 passed in 0.22 seconds =============== > > So this does sort-of work although the repr is duplicated and newlines are > being escaped. I think making this show correctly is worthwhile here. Now there are most likely testsuites which rely on the current "sanitisation" of the repr output, either by design or more likely by accident. So implementing probably has to be a little more careful, e.g. maybe introduce something like the __tracebackhide__ method. But then we should probably be more modern and have some class decorator instead of that old locals hack (we really should also provide a modern decorator-based API for __tracebackhide__ as well if someone feels like a simple-ish feature). Cheers, Floris From nicoddemus at gmail.com Tue Mar 27 07:02:10 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Tue, 27 Mar 2018 11:02:10 +0000 Subject: [pytest-dev] pytest-flask maintainer unreachable Message-ID: Hi everyone, People have expressed concern over pytest-flask because there has not been a release in over 2 years (see https://github.com/pytest-dev/pytest-flask/issues/72) and Vital Kudzelka (the maintainer) has not been active in the repository for some time, with PRs waiting to be merged. Does anybody have other means to contact Vital Kudzelka other than GitHub? He doesn't seem to be commenting/responding to issues and PRs for awhile (the last comment I can see is from Nov 6, 2016: https://github.com/pytest-dev/pytest-flask/issues/52#issuecomment-258704636, but it might be that I'm just not good at using GH's issue search). I've created an issue on the repository to track this discussion: https://github.com/pytest-dev/pytest-flask/issues/76 Any suggestions from the core devs of other actions that can/should be taken in situations like this, according to our pytest-dev policies? 
Cheers, Bruno. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vital.kudzelka at gmail.com Tue Mar 27 08:16:51 2018 From: vital.kudzelka at gmail.com (Vital Kudzelka) Date: Tue, 27 Mar 2018 12:16:51 +0000 Subject: [pytest-dev] pytest-flask maintainer unreachable In-Reply-To: References: Message-ID: Hi Bruno, I'm going to review pending PRs and issues required my attention within this week. I'm trying to do my best and sorry for a long delay in responses. Cheers, Vital. On Tue, Mar 27, 2018 at 2:02 PM Bruno Oliveira wrote: > Hi everyone, > > People have expressed concern over pytest-flask because there has not been > a release in over 2 years (see > https://github.com/pytest-dev/pytest-flask/issues/72) and Vital Kudzelka > (the maintainer) has not been active in the repository for some time, with > PRs waiting to be merged. > > Does anybody have other means to contact Vital Kudzelka other than GitHub? > He doesn't seem to be commenting/responding to issues and PRs for awhile > (the last comment I can see is from Nov 6, 2016: > https://github.com/pytest-dev/pytest-flask/issues/52#issuecomment-258704636, > but it might be that I'm just not good at using GH's issue search). > > I've created an issue on the repository to track this discussion: > https://github.com/pytest-dev/pytest-flask/issues/76 > > Any suggestions from the core devs of other actions that can/should be > taken in situations like this, according to our pytest-dev policies? > > Cheers, > Bruno. > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Tue Mar 27 09:16:54 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Tue, 27 Mar 2018 13:16:54 +0000 Subject: [pytest-dev] pytest-flask maintainer unreachable In-Reply-To: References: Message-ID: Thanks a lot Vital, and sorry about contacting you off GitHub. Cheers, Bruno. On Tue, Mar 27, 2018 at 9:17 AM Vital Kudzelka wrote: > Hi Bruno, > > I'm going to review pending PRs and issues required my attention within > this week. I'm trying to do my best and sorry for a long delay in responses. > > Cheers, > Vital. > > On Tue, Mar 27, 2018 at 2:02 PM Bruno Oliveira > wrote: > >> Hi everyone, >> >> People have expressed concern over pytest-flask because there has not >> been a release in over 2 years (see >> https://github.com/pytest-dev/pytest-flask/issues/72) and Vital Kudzelka >> (the maintainer) has not been active in the repository for some time, with >> PRs waiting to be merged. >> >> Does anybody have other means to contact Vital Kudzelka other than >> GitHub? He doesn't seem to be commenting/responding to issues and PRs for >> awhile (the last comment I can see is from Nov 6, 2016: >> https://github.com/pytest-dev/pytest-flask/issues/52#issuecomment-258704636, >> but it might be that I'm just not good at using GH's issue search). >> >> I've created an issue on the repository to track this discussion: >> https://github.com/pytest-dev/pytest-flask/issues/76 >> >> Any suggestions from the core devs of other actions that can/should be >> taken in situations like this, according to our pytest-dev policies? >> >> Cheers, >> Bruno. 
>> > _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vital.kudzelka at gmail.com Tue Mar 27 09:34:49 2018 From: vital.kudzelka at gmail.com (Vital Kudzelka) Date: Tue, 27 Mar 2018 13:34:49 +0000 Subject: [pytest-dev] pytest-flask maintainer unreachable In-Reply-To: References: Message-ID: No problem, Bruno! Thank you for get in touch. > have you considered bringing more people to the team to help out with maintaining `pytest-flask`? I'm glad to extend the maintaining team if any of contributors would like to help. Cheers, Vital. On Tue, Mar 27, 2018 at 4:17 PM Bruno Oliveira wrote: > Thanks a lot Vital, and sorry about contacting you off GitHub. > > Cheers, > Bruno. > > On Tue, Mar 27, 2018 at 9:17 AM Vital Kudzelka > wrote: > >> Hi Bruno, >> >> I'm going to review pending PRs and issues required my attention within >> this week. I'm trying to do my best and sorry for a long delay in responses. >> >> Cheers, >> Vital. >> >> On Tue, Mar 27, 2018 at 2:02 PM Bruno Oliveira >> wrote: >> >>> Hi everyone, >>> >>> People have expressed concern over pytest-flask because there has not >>> been a release in over 2 years (see >>> https://github.com/pytest-dev/pytest-flask/issues/72) and Vital >>> Kudzelka (the maintainer) has not been active in the repository for some >>> time, with PRs waiting to be merged. >>> >>> Does anybody have other means to contact Vital Kudzelka other than >>> GitHub? He doesn't seem to be commenting/responding to issues and PRs for >>> awhile (the last comment I can see is from Nov 6, 2016: >>> https://github.com/pytest-dev/pytest-flask/issues/52#issuecomment-258704636, >>> but it might be that I'm just not good at using GH's issue search). >>> >>> I've created an issue on the repository to track this discussion: >>> https://github.com/pytest-dev/pytest-flask/issues/76 >>> >>> Any suggestions from the core devs of other actions that can/should be >>> taken in situations like this, according to our pytest-dev policies? >>> >>> Cheers, >>> Bruno. >>> >> _______________________________________________ >>> pytest-dev mailing list >>> pytest-dev at python.org >>> https://mail.python.org/mailman/listinfo/pytest-dev >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fmorte at ya.ru Wed Mar 28 09:08:48 2018 From: fmorte at ya.ru (Oleg S) Date: Wed, 28 Mar 2018 16:08:48 +0300 Subject: [pytest-dev] Pytest hoodie In-Reply-To: <1863411521815560@web44g.yandex.ru> References: <73511521737062@web28j.yandex.ru> <1863411521815560@web44g.yandex.ru> Message-ID: <4492731522242528@web36g.yandex.ru> An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Wed Mar 28 09:23:38 2018 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Wed, 28 Mar 2018 13:23:38 +0000 Subject: [pytest-dev] Pytest hoodie In-Reply-To: <4492731522242528@web36g.yandex.ru> References: <73511521737062@web28j.yandex.ru> <1863411521815560@web44g.yandex.ru> <4492731522242528@web36g.yandex.ru> Message-ID: Hi Oleg, That's awesome, thanks for sharing! :) Welcome to the project! Cheers, On Wed, Mar 28, 2018 at 10:11 AM Oleg S wrote: > Hi all! 
> I printed a pytest hoodie :D
>
> Please follow these links for images:
> https://sun9-4.userapi.com/c831508/v831508192/beec8/HthxTa1did4.jpg
> https://sun9-5.userapi.com/c840721/v840721192/6e0de/uu1lMTUTl5A.jpg
>
> Also, I made a leather bracelet "trust me, I'm a qa engineer". :)
> https://pp.userapi.com/c845120/v845120192/14e56/u7W7F927oQY.jpg
> https://sun9-4.userapi.com/c824202/v824202192/f9d55/18FMgdHw5gc.jpg
>
> I'm happy to be a part of pytest :)
>
> Thanks!
>
> --
> Best regards,
> Oleg
> fmorte at ya.ru
>
> _______________________________________________
> pytest-dev mailing list
> pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: