Match subnets in O(n^3) via linear_sum_assignment. #795
anntzer wants to merge 1 commit into soft-matter:master
Conversation
To solve subnets, instead of brute-forcing the n! combinations, use scipy.optimize.linear_sum_assignment (commonly known as the Hungarian/Munkres algorithm, although scipy actually uses a different algorithm), which provides a solution in O(n^3). This is not an original idea; see e.g. https://imagej.net/imagej-wiki-static/TrackMate_Algorithms.html#Solving_LAP

Locally, this method solves the "slow" example in adaptive-search.ipynb (`tracks_regular = trackpy.link_df(cg, 0.75)`) in ~50 ms instead of ~1 min. linear_sum_assignment ("lsa") was extensively benchmarked in scipy PR#12541 (which was mostly about adding a *sparse* variant, but one can just look at the performance of lsa there), and shows sub-second runtimes for thousands of inputs. This is also why this PR fully skips the use of MAX_SUB_NET (at most, one may consider setting an alternate value in the thousands for this strategy).

The docs will need to be updated, but this PR is just to discuss the implementation.
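To make the idea concrete, here is a minimal sketch of solving a single subnet by minimum-cost assignment. This is a hypothetical illustration, not the PR's actual code: it assumes the cost of a link is the squared displacement, forbids over-long links with np.inf, and pads the cost matrix with "dummy" columns so a particle can stay unlinked at cost search_range**2 (the TrackMate-style trick).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_subnet(source, dest, search_range):
    """Resolve one subnet by minimum-cost assignment.

    Hypothetical sketch, not trackpy's implementation: links cost their
    squared displacement; ns extra "dummy" columns let each source
    particle remain unlinked at cost search_range**2.
    """
    ns, nd = len(source), len(dest)
    # squared distance between every source/dest pair
    cost = ((source[:, None, :] - dest[None, :, :]) ** 2).sum(-1)
    cost[cost > search_range ** 2] = np.inf  # forbid over-long links
    # dummy columns: "no link", at a high but finite cost
    padded = np.hstack([cost, np.full((ns, ns), search_range ** 2)])
    rows, cols = linear_sum_assignment(padded)  # O(n^3), not O(n!)
    # a column index >= nd means the source particle went unlinked
    return [(int(r), int(c) if c < nd else None) for r, c in zip(rows, cols)]

src = np.array([[0.0, 0.0], [1.0, 0.0]])
dst = np.array([[0.1, 0.0], [5.0, 5.0]])
print(solve_subnet(src, dst, 1.0))  # [(0, 0), (1, None)]
```

Note that the globally optimal solution keeps the 0-0 link and leaves source 1 unlinked (total cost 0.01 + 1.0), even though source 1 is also within range of dest 0; a greedy solver could get this wrong.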
I pulled this PR locally to try it out, and while I can see a slight difference in speed when running on individual files, it's not much different from numba when running on larger samples of my own data. Just to make sure I'm not missing something, is it enough to pass
Yes. (Or in fact leaving the default value also works, if you don't have numba installed, because I changed the default only in that case.)
Yes, this is specifically better in the case where you have a lot of features that are within search_range of each other, but which all move together in some coordinated fashion (e.g., the "slow" example at https://soft-matter.github.io/trackpy/v0.7/tutorial/adaptive-search.html). In my specific use case, this occurs because I have multiple features that are all detected on a single larger solid object; these features simply track the solid's motion, and the spacing between the features is similar to the distance by which the object (and thus the features) moves from one frame to the next.
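A toy numpy/scipy illustration of that coordinated-motion case (not from the PR; grid size and displacement are made up): a rigid grid whose translation per frame is comparable to the feature spacing, so the whole grid collapses into one big subnet, yet the global assignment still recovers the correct links.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy coordinated-motion case: a 5x5 grid of features with spacing 1.0
# that translates rigidly by 0.75 between frames.  For interior features,
# the *nearest* candidate in the next frame is a different feature's new
# position (only 0.25 away), so naive nearest-neighbor linking would
# mis-link most of the grid.
xy = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
frame0 = xy
frame1 = xy + np.array([0.75, 0.0])

cost = ((frame0[:, None] - frame1[None]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)
# The globally optimal assignment is the identity permutation: each
# feature is linked to its own displaced copy.
print(np.array_equal(cols, np.arange(25)))  # True
```

For a pure translation the identity is provably optimal: expanding the total squared cost of any permutation, the cross term vanishes (the coordinates just get permuted), leaving a constant plus the sum of squared distances between paired original positions, which is zero only for the identity.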
See also #450. In fact, it may perhaps(?) be possible to get even better performance by completely dropping the subnet paradigm and just passing the whole distance matrix (sparsified by removing edges longer than the search range) to scipy.sparse.csgraph.min_weight_full_bipartite_matching, but 1) one would need to figure out how to handle unmatched points (maybe by adding "dummy" points with a high but finite link cost), and 2) this would require more massive code surgery anyway. Edit: Maybe this doesn't actually work if we want to maintain the current approach of having too-long edges within a subnet be set to search_range**2.
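The sparse whole-frame alternative mentioned in the discussion could be sketched as follows. This is purely hypothetical (it is exactly the variant the author says would need more work, and it omits too-long edges entirely rather than capping them at search_range**2); each source particle gets a private "dummy" destination so the required full matching always exists.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import min_weight_full_bipartite_matching

def link_sparse(source, dest, search_range):
    """Link one whole frame at once via a sparse min-weight matching.

    Hypothetical sketch, not trackpy code: edges longer than search_range
    are simply omitted, and each source particle gets its own "dummy"
    destination column so it can stay unlinked at cost search_range**2.
    """
    ns, nd = len(source), len(dest)
    d2 = ((source[:, None, :] - dest[None, :, :]) ** 2).sum(-1)
    i, j = np.nonzero(d2 <= search_range ** 2)
    # a tiny offset keeps zero-length displacements from becoming
    # explicit zeros, which the sparse matcher would treat as missing edges
    eps = 1e-12
    rows = np.concatenate([i, np.arange(ns)])
    cols = np.concatenate([j, nd + np.arange(ns)])
    vals = np.concatenate([d2[i, j] + eps, np.full(ns, search_range ** 2)])
    graph = csr_matrix((vals, (rows, cols)), shape=(ns, nd + ns))
    r, c = min_weight_full_bipartite_matching(graph)
    # columns beyond nd are dummies, i.e. the particle went unlinked
    return {int(ri): (int(ci) if ci < nd else None) for ri, ci in zip(r, c)}

src = np.array([[0.0, 0.0], [1.0, 0.0]])
dst = np.array([[0.1, 0.0], [5.0, 5.0]])
print(link_sparse(src, dst, 1.0))  # {0: 0, 1: None}
```

The appeal over the dense subnet solver is that the sparsified graph never materializes the full ns-by-nd cost matrix, so very dilute frames stay cheap; the open question from the discussion (how to reproduce the search_range**2 cap on in-subnet edges) remains unaddressed here.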