
Support parallel tests with external database #474

@Otto-AA


Similar to pytest-xdist: https://pytest-xdist.readthedocs.io/en/stable/how-to.html

I'm not sure yet how to implement this in mutmut, as the pytest setup is different: in pytest-xdist, session fixtures are run once per worker, and I'm not sure how we could replicate that in mutmut.

I have a project which connects to a local postgres instance for integration tests. When running tests in parallel, they should connect to different databases on this instance, otherwise they would conflict with each other (e.g. test A deletes data from some table, while test B tries to read from this table). This means I currently cannot run these tests with mutmut.

The setup with pytest-xdist is:

  1. start e.g. 12 processes with PYTEST_XDIST_WORKER set to gw1, gw2, ..., gw12
  2. each process runs the pytest session fixture to set up the database
    2.1. The fixture connects to postgres and creates a database test_db_{PYTEST_XDIST_WORKER}
    2.2. Afterwards, we have 12 different databases
  3. When running a test, it runs against the test_db_{PYTEST_XDIST_WORKER} database

Therefore, each worker uses its own database.
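The database-name derivation in steps (2)–(3) boils down to reading PYTEST_XDIST_WORKER. A minimal sketch, assuming a test_db prefix as in the description (the helper name and the commented fixture wiring are illustrative, not from mutmut or pytest-xdist):

```python
import os

def worker_db_name(base: str = "test_db") -> str:
    """Derive a per-worker database name from the xdist worker id.

    pytest-xdist sets PYTEST_XDIST_WORKER to "gw0", "gw1", ... in each
    worker process; the variable is unset when running without xdist.
    """
    worker = os.environ.get("PYTEST_XDIST_WORKER", "master")
    return f"{base}_{worker}"

# In conftest.py this would back a session-scoped fixture, roughly:
#
#   @pytest.fixture(scope="session")
#   def database():
#       name = worker_db_name()
#       admin_conn.execute(f"CREATE DATABASE {name}")  # hypothetical connection
#       yield name
```

Because the fixture is session-scoped, each xdist worker creates exactly one database, which is what mutmut would need to reproduce.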

I think something like step (1) would be rather straightforward: we use fork and do not have explicit workers, but we run at most n processes in parallel. We could create n ids, keep track of which ids are still free, and every time we fork another process take one of the free ids.
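The id bookkeeping described above could be sketched as a small pool guarded by a lock (the class and method names here are hypothetical, not existing mutmut API):

```python
import threading

class WorkerIdPool:
    """Hand out worker ids 1..n; each forked child gets a currently free id."""

    def __init__(self, n: int):
        self._free = set(range(1, n + 1))
        self._lock = threading.Lock()

    def acquire(self) -> int:
        with self._lock:
            # Raises KeyError if all n ids are in use, i.e. we are
            # already running the maximum number of parallel forks.
            return self._free.pop()

    def release(self, worker_id: int) -> None:
        with self._lock:
            self._free.add(worker_id)
```

The parent would call acquire() before each fork, export the id to the child (e.g. via an environment variable), and release() it once the child has been waited on.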

However, I'm not sure how we would go about running session fixtures once per "worker". I suppose we could:

  1. spawn (and not fork) 12 worker processes
  2. set MUTMUT_WORKER_1/2/.../12
  3. In each process run the complete test suite / an init method (for caching and running the database setup fixtures)
  4. Split the mutants across the workers (or use a multiprocessing queue/...)
  5. Inside each worker, sequentially fork to create isolated copies of the worker process (with everything already set up) and run the tests for a mutation there
  6. Workers write their results into a results queue and the main process saves the stats in files (we should not write the foo.py.meta files from different processes)
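Steps (4)–(6) could be sketched with multiprocessing queues. To stay self-contained this sketch uses the fork start method and fakes the test verdict; the actual proposal would spawn the workers, set MUTMUT_WORKER for each, and run the session fixtures first (all names here are hypothetical):

```python
import multiprocessing as mp

def worker(worker_id, mutants, results):
    """Pull mutant ids from a shared queue until a sentinel arrives."""
    while True:
        mutant = mutants.get()
        if mutant is None:  # sentinel: no more work for this worker
            break
        # Here mutmut would fork, apply the mutation and run the tests;
        # we just report a fixed verdict for illustration.
        results.put((worker_id, mutant, "killed"))

def run_mutants(mutant_ids, n_workers=2):
    ctx = mp.get_context("fork")
    mutants = ctx.Queue()
    results = ctx.Queue()
    workers = [ctx.Process(target=worker, args=(i, mutants, results))
               for i in range(1, n_workers + 1)]
    for w in workers:
        w.start()
    for m in mutant_ids:
        mutants.put(m)
    for _ in workers:
        mutants.put(None)  # one sentinel per worker
    # Only the main process consumes results and would write the stats
    # files, so no two processes touch the same foo.py.meta file.
    collected = [results.get() for _ in mutant_ids]
    for w in workers:
        w.join()
    return collected
```

Using a shared work queue instead of splitting the mutants up front also balances load automatically when some mutants take much longer than others.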
