[pytest-dev] xdist grouping: request for user stories

Bruno Oliveira nicoddemus at gmail.com
Thu Oct 23 22:32:31 CEST 2014


Hi Dj,

How do you know beforehand if a slave satisfies a requirement or not? Is
that based on fixtures? Can you provide more details, perhaps with some
sample code to illustrate it better?

One of the ideas Holger mentioned is the ability to control test
distribution based on fixtures as well, so that tests using a certain
fixture would automatically inherit the distribution properties attached
to it. From what I could gather, this might be a good fit for your case.
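
Just to make that idea concrete (purely illustrative -- nothing like
this exists in pytest-xdist yet), here is a conftest.py sketch that
detects which tests use a hypothetical "gui" fixture and pushes them to
the end of the collection; actually pinning them to a node would still
need support inside xdist itself:

    # conftest.py -- run tests that use the "gui" fixture last;
    # real node pinning would need support inside xdist itself
    def pytest_collection_modifyitems(items):
        def uses_gui(item):
            return "gui" in getattr(item, "fixturenames", ())
        items[:] = ([i for i in items if not uses_gui(i)]
                    + [i for i in items if uses_gui(i)])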

On Thu, Oct 23, 2014 at 4:30 PM, Dj Gilcrease <digitalxero at gmail.com> wrote:

>
> On Thu, Oct 23, 2014 at 7:46 AM, Bruno Oliveira <nicoddemus at gmail.com>
> wrote:
>
>> Hi all,
>>
>> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel <holger at merlinux.eu>
>> wrote:
>> > Bruno Oliveira did a PR that satisfies his company's particular
>> requirements, in that GUI tests must run after all others, pinned to a
>> particular node.
>>
>> To describe my requirements in a little more detail:
>>
>> We have a lot of GUI tests and most (say, 95%) work in parallel without
>> any problems.
>>
>> A few of them, though, deal with window input focus, and those tests
>> specifically can't run in parallel with other tests. For example, a test
>> might (a rough sketch follows the list):
>>
>> 1. create a widget and display it
>> 2. edit a gui control, obtaining input focus
>> 3. lose edit focus on purpose during the test, asserting that some other
>> behavior occurs in response to that
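>>
>> A minimal sketch of that kind of test (the Qt widget and pytest-qt's
>> qtbot fixture here are assumptions for illustration, not taken from
>> our actual suite):
>>
>>     from PySide.QtGui import QLineEdit
>>
>>     def test_reacts_to_losing_focus(qtbot):
>>         edit = QLineEdit()     # 1. create a widget...
>>         qtbot.addWidget(edit)
>>         edit.show()            # ...and display it
>>         edit.setFocus()        # 2. obtain input focus
>>         edit.clearFocus()      # 3. lose focus on purpose
>>         # assert that the expected behavior occurs in response
>>         assert not edit.hasFocus()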
>>
>> When this type of test runs in parallel with others, sometimes the other
>> tests will pop up their own widgets and the first test will lose input
>> focus at an unexpected place, causing it to fail.
>>
>> Other tests take screenshots of 3d rendering windows for regression
>> testing against known "good" screenshots, and those are also susceptible to
>> the same problem, where a window popping up in front of another just
>> before a screenshot can change the image and fail the test.
>>
>> In summary, some of our tests can't be run in parallel with any other
>> tests.
>>
>> The current workaround is to apply a special mark to the tests that
>> can't run in parallel and run pytest twice: one parallel session that
>> excludes the marked tests, and a regular, non-parallel session with only
>> the marked tests. This works, but it is error-prone when developers run
>> the tests from the command line, and it makes error reports cumbersome
>> to read (the first problem is alleviated a bit by a custom script).
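>>
>> Concretely, the two invocations look something like this (the "serial"
>> mark name is illustrative, not our actual one):
>>
>>     py.test -n 4 -m "not serial"   # parallel, marked tests excluded
>>     py.test -m serial              # plain session, only marked tests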
>>
>> Other use cases have been discussed in this issue:
>> https://bitbucket.org/hpk42/pytest/issue/175.
>>
>> My PR distributes the marked tests to a single node after all other
>> tests have executed, but after some discussion it is clear that there
>> may be more use cases out there, so we would like to hear them before
>> deciding on the best interface options.
>>
>> Cheers,
>>
>> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel <holger at merlinux.eu>
>> wrote:
>>
>>> Hi all,
>>>
>>> Currently, pytest-xdist in load-balancing mode ("-n4") does not support
>>> grouping of tests or influencing how tests are distributed to nodes.
>>> Bruno Oliveira did a PR that satisfies his company's particular
>>> requirements,
>>> in that GUI tests must run after all others, pinned to a particular
>>> node.
>>> (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/)
>>>
>>> My question is: what needs do you have with respect to test
>>> distribution with xdist?
>>> I'd like to collect and write down some user stories before we design
>>> options/mechanisms and then possibly get to implement them.  Your input
>>> is much appreciated.
>>>
>>> best and thanks,
>>> holger
>>
>
> I think it would be useful to be able to pin marks to, or exclude them
> from, individual nodes. We have marks like requires_abc, and right now we
> have logic that checks whether a requirement is met before running the
> tests and marks them as skipped if it is not. But I already know which
> requirements are satisfied on which nodes, and I would like to not even
> send tests to a node that cannot run them.
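>
> Roughly, the current skip logic looks like this (a sketch; has_abc() is
> a stand-in for our real probe of the node):
>
>     # conftest.py
>     import pytest
>
>     def has_abc():
>         return False  # stand-in: probe whether this node meets "abc"
>
>     def pytest_collection_modifyitems(config, items):
>         if has_abc():
>             return
>         skip = pytest.mark.skipif(True, reason="'abc' not met here")
>         for item in items:
>             if "requires_abc" in item.keywords:
>                 item.add_marker(skip)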
>