EvoMorph: Counterfactual Explanations for Uncertainty and Explainability on Time Series
Machine learning models achieve strong performance on time series data, yet their predictions are often difficult to interpret. In high-stakes domains such as healthcare, understanding how predictions could change, which signal characteristics influence them, and how reliable they are for a given input is crucial for building trust and supporting clinical decision-making.
EvoMorph addresses these challenges by generating counterfactual explanations for time series extrinsic regression models. Instead of arbitrarily perturbing individual features or time-domain signal values, EvoMorph searches for realistic alternative signals that would lead to a different model prediction. The framework formulates counterfactual generation as a multi-objective optimization problem, which is solved using evolutionary algorithms to balance several competing objectives simultaneously.
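To make the search concrete, here is a minimal illustrative sketch of evolutionary counterfactual generation. All names (`counterfactual_search`, the objective weights, the smoothness term) are hypothetical, and the sketch scalarizes the competing objectives with fixed weights for brevity, whereas a true multi-objective method such as the one described above would maintain a Pareto front of trade-off solutions:

```python
import numpy as np

def counterfactual_search(model, x, target, n_pop=50, n_gen=100,
                          sigma=0.05, rng=None):
    """Evolve perturbed copies of the series x toward a target prediction
    while keeping them close to x and morphologically plausible.
    Illustrative sketch only, not the EvoMorph implementation."""
    rng = np.random.default_rng(rng)
    pop = x + rng.normal(0.0, sigma, size=(n_pop, len(x)))
    weights = np.array([1.0, 0.1, 0.1])  # assumed trade-off weights

    def objectives(cand):
        validity = abs(model(cand) - target)         # reach the target prediction
        proximity = np.linalg.norm(cand - x)         # stay close to the original
        smoothness = np.abs(np.diff(cand, 2)).sum()  # discourage jagged signals
        return np.array([validity, proximity, smoothness])

    for _ in range(n_gen):
        scores = np.array([objectives(c) for c in pop])
        # keep the best half (by scalarized score) and mutate it
        rank = np.argsort(scores @ weights)
        parents = pop[rank[: n_pop // 2]]
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        pop = np.vstack([parents, children])

    scores = np.array([objectives(c) for c in pop])
    return pop[np.argmin(scores @ weights)]
```

The three objective terms mirror the competing goals named above: changing the prediction (validity), staying near the factual input (proximity), and remaining a realistic signal (here approximated crudely by a second-difference smoothness penalty).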
Beyond interpretability, EvoMorph can also be used to analyze model uncertainty. If small but plausible signal variations lead to widely varying predictions, this indicates high epistemic uncertainty and limited model support in that region. Conversely, stable predictions across many counterfactual variations suggest higher model confidence. In this way, EvoMorph provides a practical tool for probing prediction stability and identifying regions of low data density.
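The uncertainty probe described above can be sketched as follows. This is an assumed simplification: the function name `prediction_spread` and the choice of smoothed Gaussian noise as "small but plausible signal variations" are illustrative stand-ins for the counterfactual variations EvoMorph would generate:

```python
import numpy as np

def prediction_spread(model, x, n_variants=200, sigma=0.02, rng=None):
    """Probe local prediction stability: apply many small, smooth
    perturbations to x and measure the spread of model predictions.
    A large spread hints at high epistemic uncertainty around x."""
    rng = np.random.default_rng(rng)
    kernel = np.ones(5) / 5.0  # moving-average kernel to smooth the noise
    preds = []
    for _ in range(n_variants):
        noise = rng.normal(0.0, sigma, size=len(x))
        # smoothing keeps the perturbed series morphologically plausible
        smooth = np.convolve(noise, kernel, mode="same")
        preds.append(model(x + smooth))
    preds = np.asarray(preds)
    return preds.std(), (preds.min(), preds.max())
```

A small standard deviation across variants suggests the model's prediction is stable in that region; a wide range flags inputs where the model has limited support.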
Overall, EvoMorph contributes to the development of reliable and interpretable AI systems for time series analysis by providing morphology-aware counterfactual explanations and enabling uncertainty analysis through counterfactual exploration.