[pytest-dev] parameters and fixtures, storage and benchmarks: feedback appreciated

Sylvain MARIE sylvain.marie at se.com
Mon Nov 5 16:25:36 EST 2018


Dear pytest development team

In our data science team we develop a lot of machine learning/statistics components, and a recurrent need is to benchmark our code against many reference datasets. By benchmarking we usually mean "applicative" benchmarking (for example algorithm accuracy), even if execution time may also be examined as a secondary target. For this reason https://github.com/ionelmc/pytest-benchmark does not seem to suit our needs.

Over the past years I developed several versions of a benchmarking toolkit for this purpose, and I have now managed to port it entirely to pytest, so as to leverage all the great built-in mechanisms (namely, parameters and fixtures).

It is mostly for this reason that I broke the problem down into smaller pieces and open-sourced each piece as a separate mechanism. You already know
pytest-cases (to separate the test logic from the test cases/datasets)
and pytest-steps (to break the test logic down into steps while keeping it quite readable).
The last piece, which I just completed, is about storing test results so as to get true "benchmark" functionality with synthesis reports at the end.
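For context, the idea of separating the test cases from the test logic can be sketched in plain pytest as follows; this is just an illustration of the pattern, not the actual pytest-cases API, and the case/test names are made up for the example.

    import pytest

    # Each "case" is a plain function returning a reference dataset.
    def case_two_clusters():
        return [(0, 0), (0, 1), (10, 10), (10, 11)]

    def case_single_outlier():
        return [(0, 0), (0, 1), (100, 100)]

    # The test logic is written once and parametrized over the cases.
    @pytest.mark.parametrize("case", [case_two_clusters, case_single_outlier])
    def test_clustering_accuracy(case):
        dataset = case()
        assert len(dataset) >= 3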

I broke the mechanism down into two parts (a rough sketch of both is shown below) and would like to have your opinion:
- one part will be a decorator for function-scoped fixtures, so as to say "please store all of this fixture's instances" (in a dictionary-like storage object, under key=test id)
- the other part will be a special fixture, decorated with the above, allowing people to easily inject "results bags" into their test functions, so as to save results and retrieve them at the end
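Here is the promised sketch of how the two parts could look from a user's point of view. The names (stored_fixture, results_bag, FIXTURE_STORE) are placeholders chosen for illustration only, not an existing API, and the decorator below only handles fixtures that depend on `request`.

    import functools
    import pytest

    # Dictionary-like storage object: fixture name -> {test id -> stored instance}
    FIXTURE_STORE = {}

    def stored_fixture(fixture_fun):
        """Part 1: decorate a function-scoped fixture so that each of its
        instances is stored under the id of the test that used it."""
        @pytest.fixture
        @functools.wraps(fixture_fun)
        def wrapper(request):
            value = fixture_fun(request)
            FIXTURE_STORE.setdefault(fixture_fun.__name__, {})[request.node.nodeid] = value
            return value
        return wrapper

    # Part 2: a special fixture built on part 1, injecting a "results bag"
    # (here simply a dict) into test functions.
    @stored_fixture
    def results_bag(request):
        return {}

    def test_algorithm_accuracy(results_bag):
        results_bag["accuracy"] = 0.93  # retrieved later from FIXTURE_STORE, keyed by test id

At the end of the session the store can then be turned into a synthesis report, for example one row per test id.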

Do you think that this (in particular the first item, storing fixtures) is compliant with pytest? From investigating, I found out that params are currently stored in the session but fixtures are not (on purpose). It therefore seems natural to propose a way for users to store very specific fixtures on a declarative basis.

Concerning where to store them (the global storage object), I currently allow users to rely either on a global variable or on a fixture. Would there be another preferred way to store information across all tests in pytest?
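For the fixture-based option, a minimal sketch of what I have in mind is a session-scoped fixture holding the store and producing the synthesis at teardown (again, illustrative names only; nothing populates the store in this snippet):

    import pytest

    @pytest.fixture(scope="session")
    def fixture_store():
        store = {}  # fixture name -> {test id -> stored instance}
        yield store
        # Session teardown: turn the store into a synthesis report,
        # here simply one line per stored fixture.
        for fixture_name, per_test in store.items():
            print(f"{fixture_name}: {len(per_test)} stored instances")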

Of course, compatibility with pytest-xdist will be a holy grail in the future; I do not want to aim for it directly yet, but rather make the 'least bad' design choices now.

Thanks in advance for your kind review and advice!
Best regards

--
Sylvain

