Explaining visual models by causal attribution
Published in ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models, 2019
Abstract: Model explanations based purely on observational data cannot reliably compute the effects of features, because they cannot estimate how altering one factor would affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, which represent the data distribution subject to interventions. With these models, we can compute counterfactuals: new samples that show how the model reacts to feature changes in our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.
Recommended citation: Parafita, Á., & Vitrià, J. (2019, October). Explaining visual models by causal attribution. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) (pp. 4167-4175). IEEE.
Download Paper