Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change
Abstract
Counterfactual explanations (CFEs) guide users on how to adjust inputs to machine learning models to achieve desired outputs. While existing research primarily addresses static scenarios, real-world applications often involve data or model changes, potentially invalidating previously generated CFEs and rendering user-induced input changes ineffective. Current methods addressing this issue often support only specific models or change types, require extensive hyperparameter tuning, or fail to provide probabilistic guarantees on CFE robustness to model changes. This paper proposes a novel approach for generating CFEs that provides probabilistic guarantees for any model and change type, while offering interpretable and easy-to-select hyperparameters. We establish a theoretical framework for probabilistically defining robustness to model change and demonstrate how our BetaRCE method directly stems from it. BetaRCE is a post-hoc method applied alongside a chosen base CFE generation method to enhance the quality of the explanation beyond robustness. It facilitates a transition from the base explanation to a more robust one, with user-adjusted probability bounds. Through experimental comparisons with baselines, we show that BetaRCE generates counterfactual explanations that are robust, plausible, and close to the base explanations.
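To make the kind of probabilistic guarantee described above concrete, here is a hedged sketch of how a lower credible bound on a CFE's robustness probability could be estimated: sample a number of retrained models, count how many still classify the counterfactual as desired, and take a lower quantile of a Beta posterior over the robustness probability. The function name, prior, and defaults are illustrative assumptions, not the paper's implementation.

```python
import random


def beta_posterior_lower_bound(successes, trials, confidence=0.95,
                               prior=(1.0, 1.0), n_draws=100_000, seed=0):
    """Monte Carlo lower credible bound on a CFE's robustness probability.

    `successes` = number of sampled (changed) models for which the CFE still
    yields the desired prediction; `trials` = number of models sampled.
    The posterior is Beta(prior_a + successes, prior_b + failures); we draw
    from it with the stdlib's random.betavariate and return the empirical
    (1 - confidence) quantile as a lower bound.
    """
    rng = random.Random(seed)
    a = prior[0] + successes
    b = prior[1] + (trials - successes)
    draws = sorted(rng.betavariate(a, b) for _ in range(n_draws))
    return draws[int((1.0 - confidence) * n_draws)]


# Example: 48 of 50 sampled models keep the counterfactual valid.
lb = beta_posterior_lower_bound(48, 50)
# A user-chosen target delta (hypothetical): accept the CFE as robust
# only if the lower credible bound exceeds it.
is_robust = lb >= 0.8
```

A CFE generator could then iterate (e.g., move the counterfactual deeper into the desired class region) until this lower bound clears the user's target, which is one way to realize a user-adjusted probability bound.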
Cite as
@inproceedings{stepka2024cfeprob,
title={Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change},
author={Ignacy St\k{e}pka and Mateusz Lango and Jerzy Stefanowski},
booktitle={31st SIGKDD Conference on Knowledge Discovery and Data Mining - Research Track},
year={2024},
url={https://arxiv.org/abs/2408.04842}
}