Description
Similar to pytest-xdist: https://pytest-xdist.readthedocs.io/en/stable/how-to.html
I'm not sure yet how to implement this in mutmut, as the pytest setup is different: in pytest-xdist, session fixtures are run once per worker, and I'm not sure how we could replicate that in mutmut.
I have a project which connects to a local postgres instance for integration tests. When running tests in parallel, they should connect to different databases on this instance, otherwise they would conflict with each other (e.g. test A deletes data from some table, while test B tries to read from this table). This means I currently cannot run these tests with mutmut.
The setup with pytest-xdist is:

1. Start e.g. 12 processes with `PYTEST_XDIST_WORKER` set to `gw1`, `gw2`, ..., `gw12`
2. Each process runs the pytest session fixture to set up the database:
   2.1. The fixture connects to postgres and creates a database `test_db_{PYTEST_XDIST_WORKER}`
   2.2. Afterwards, we have 12 different databases
3. When running a test, it runs against the `test_db_{PYTEST_XDIST_WORKER}` database

Therefore, each worker uses its own database.
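For reference, the per-worker fixture can be sketched like this (illustrative only; in the real project this is a session-scoped pytest fixture that issues `CREATE DATABASE` through a postgres driver such as psycopg, and the function names here are made up):

```python
import os

def worker_db_name(default="master"):
    # pytest-xdist exports PYTEST_XDIST_WORKER (gw1, gw2, ...) in each worker;
    # without -n there is no worker, so fall back to a default name.
    worker = os.environ.get("PYTEST_XDIST_WORKER", default)
    return f"test_db_{worker}"

def setup_worker_database():
    # Stand-in for the session fixture: it would connect to postgres and
    # run the DDL; the print stands in for the real driver call.
    name = worker_db_name()
    print(f"CREATE DATABASE {name}")
    return name
```

Because the database name is derived from the worker id, the 12 workers never touch each other's data.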
I think something like step (1) would be rather straightforward: we use fork and do not have explicit workers, but we run at most n processes in parallel. We could create n ids, keep track of which ids are still free, and every time we fork another process take one of the free ids.
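That id bookkeeping could be sketched as follows (a sketch only; the class and names are made up, not existing mutmut code):

```python
class WorkerIdPool:
    """Keep track of which of n worker ids are free, so each forked
    process can be handed a unique id (e.g. exported as an env variable)."""

    def __init__(self, n):
        self._free = list(range(1, n + 1))  # ids 1..n start out free

    def acquire(self):
        # Called just before forking: take any currently free id.
        # (Raises IndexError if all n ids are in use; the real scheduler
        # would wait for a process to finish instead.)
        return self._free.pop()

    def release(self, worker_id):
        # Called when the forked process exits: the id can be reused.
        self._free.append(worker_id)
```

With at most n = 12 parallel processes, at most 12 ids are ever in use, and a finished process returns its id to the pool before the next fork takes one.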
However, I'm not sure how we would go about running session fixtures once per "worker". I suppose we could:

1. `spawn` (and not `fork`) 12 worker processes
2. Set `MUTMUT_WORKER_1/2/.../12`
3. In each process, run the complete test suite / an init method (for caching and running the database setup fixtures)
4. Split the mutants across the workers (or use a multiprocessing queue/...)
5. Inside the worker, sequentially `fork` to create isolated copies of the worker process (with everything already set up) and run the tests for each mutation there
6. Workers write their results into a results queue, and the main process saves the stats in files (we should not write the `foo.py.meta` files from different processes)
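The per-mutant fork in step (5) could look roughly like this (a sketch under the assumptions above; `run_tests` is a hypothetical callable, and this relies on `os.fork`, i.e. POSIX only):

```python
import os

def run_mutant_in_fork(run_tests, mutant):
    # Fork the fully initialised worker process: the child inherits the
    # session setup (database connection, caches, imports) for free.
    pid = os.fork()
    if pid == 0:
        # Child: would apply the mutant, then run the tests against it.
        ok = run_tests(mutant)
        os._exit(0 if ok else 1)  # report pass/fail via the exit status
    # Parent (the worker): wait for the child and translate its exit code.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status) == 0
```

The parent worker would then push `(worker_id, mutant, result)` tuples into the results queue, so that only the main process ever writes the `foo.py.meta` files.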