Description
Describe the new feature or enhancement
The docs themselves say "some trial-and-error may be necessary" when setting rejection thresholds, which is honest. I'd like to add a `score_quality()` method on `Epochs` that gives each epoch a relative outlier score based on its statistical properties, so users have a data-driven starting point instead of guessing blindly.
Describe your proposed implementation
A new `Epochs.score_quality()` method using only NumPy, with no new dependencies. It would compute per-epoch features MNE already handles internally (peak-to-peak amplitude, variance, kurtosis), z-score them robustly across epochs, and return a simple score array. Optionally it could suggest `reject=` threshold values and/or plot epochs ranked by score.
Open question: should this live as a method on `Epochs`, or as a standalone function? I leaned toward a method for API consistency, but I'm happy to hear other thoughts.
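To make the proposal concrete, here's a rough NumPy-only sketch of the scoring idea. Everything here is a placeholder for discussion, not an existing MNE API: the function name, the choice of reducing across channels with `max`, and the MAD-based robust z-score are all assumptions on my part.

```python
import numpy as np

def score_quality(data):
    """Hypothetical sketch: score epochs by how much of an outlier each one is.

    data : ndarray, shape (n_epochs, n_channels, n_times)
        e.g. the output of epochs.get_data() in MNE.
    Returns an array of shape (n_epochs,); higher = more suspicious.
    """
    # Per-epoch, per-channel features over the time axis.
    ptp = data.max(axis=2) - data.min(axis=2)          # peak-to-peak amplitude
    var = data.var(axis=2)                             # variance
    # Excess kurtosis computed with NumPy only (no scipy dependency).
    centered = data - data.mean(axis=2, keepdims=True)
    m2 = (centered ** 2).mean(axis=2)
    m4 = (centered ** 4).mean(axis=2)
    kurt = m4 / (m2 ** 2 + 1e-30) - 3.0

    # Reduce over channels with max so one bad channel can flag an epoch.
    feats = np.stack([ptp, var, kurt], axis=-1).max(axis=1)  # (n_epochs, 3)

    # Robust z-score across epochs: median and MAD instead of mean and std,
    # so the scores aren't dragged around by the very outliers we want to find.
    med = np.median(feats, axis=0)
    mad = np.median(np.abs(feats - med), axis=0)
    z = (feats - med) / (1.4826 * mad + 1e-12)

    # Worst feature wins: an epoch is only as clean as its worst statistic.
    return np.abs(z).max(axis=1)
```

Usage would be something like `scores = epochs.score_quality()` followed by `epochs[scores < 5]` or a ranked plot; the exact cutoff is still up to the user, but at least it's a cutoff on a normalized, interpretable scale.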
Describe possible alternatives
autoreject is the obvious one, but it's a separate install and heavier than most users need for a quick data check. The gap between "guess a number" and "run a full autoreject pipeline" is exactly what this targets.
It could also be folded into `drop_bad()` directly, though I think keeping scoring and dropping as separate steps is better for reproducibility.
Additional context
Happy to implement this if there's interest; I just want to check the idea before writing code. I'd also love input on whether `suggest_reject=True` is useful or scope creep.