ENH: Add epochs.score_quality() native data-driven epoch quality scoring #13676

@aman-coder03

Description

Describe the new feature or enhancement

The docs themselves say "some trial-and-error may be necessary" when setting rejection thresholds, which is honest. I would like to add a `score_quality()` method on `Epochs` that gives each epoch a relative outlier score based on its statistical properties, so users have a data-driven starting point instead of guessing blindly.

Describe your proposed implementation

A new `Epochs.score_quality()` method using only NumPy, with no new dependencies. It would compute per-epoch features MNE already handles internally (peak-to-peak amplitude, variance, kurtosis), z-score them robustly across epochs, and return a simple score array. Optionally it could suggest `reject=` threshold values and/or plot epochs ranked by score.
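As a rough sketch of what I have in mind (the function name, feature set, and equal weighting here are placeholders, not a final design), operating on the plain array you would get from `epochs.get_data()`:

```python
import numpy as np

def score_quality(data):
    """Sketch of the proposed per-epoch outlier scoring (NumPy only).

    data : ndarray, shape (n_epochs, n_channels, n_times)
    Returns an array of shape (n_epochs,); larger means more outlying.
    """
    # Per-epoch, per-channel features
    ptp = data.max(axis=2) - data.min(axis=2)          # peak-to-peak amplitude
    var = data.var(axis=2)                             # variance
    centered = data - data.mean(axis=2, keepdims=True)
    m2 = (centered ** 2).mean(axis=2)
    m4 = (centered ** 4).mean(axis=2)
    kurt = m4 / np.maximum(m2 ** 2, 1e-30) - 3.0       # excess kurtosis

    feats = np.stack([ptp, var, kurt], axis=-1).reshape(len(data), -1)

    # Robust z-score across epochs: median/MAD resists the very
    # outliers we are trying to detect
    med = np.median(feats, axis=0)
    mad = np.median(np.abs(feats - med), axis=0)
    mad = np.where(mad == 0, 1.0, mad)                 # guard against zero spread
    z = np.abs(feats - med) / (1.4826 * mad)

    return z.mean(axis=1)                              # one score per epoch
```

An epoch with a large amplitude artifact gets extreme z-scores on peak-to-peak and variance, so it floats to the top of the ranking even when the other features look normal.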

Open question: should this live as a method on `Epochs`, or as a standalone function? I leaned toward a method for API consistency, but I'm happy to hear other thoughts.

Describe possible alternatives

autoreject is the obvious one, but it's a separate install and heavier than what most users need for a quick data check. The gap between "guess a number" and "run a full autoreject pipeline" is what this targets.

This could also be folded into `drop_bad()` directly, though I think keeping scoring and dropping as separate steps is better for reproducibility.
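To illustrate why I'd keep the steps separate: scoring produces an inspectable intermediate, and the drop is then an explicit, recorded action. The cutoff of 3.0 below is arbitrary, just for the example:

```python
import numpy as np

# Hypothetical two-step workflow: score, inspect, then drop explicitly.
scores = np.array([0.8, 1.1, 9.5, 0.9, 1.0])   # e.g. output of score_quality()
bad = np.flatnonzero(scores > 3.0)             # user-chosen cutoff, not built in
print(bad.tolist())                            # → [2], indices of suspect epochs

# In MNE this would then be an explicit step, e.g.
# epochs.drop(bad, reason="quality score"), which leaves a trace in
# epochs.drop_log — unlike folding everything into drop_bad().
```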

Additional context

Happy to implement this if there's interest; I just want to check the idea before writing code. I would also love input on whether a `suggest_reject=True` option is useful or scope creep.
