Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2 
Published in ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models, 2019
Model explanations based on purely observational data cannot compute the effects of features reliably, because they cannot estimate how altering one factor would affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, which represent the data distribution subject to interventions. With these models we can compute counterfactuals: new samples that show how the model reacts to feature changes in our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.
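As a rough illustration of the counterfactual computation this abstract refers to, the sketch below runs the standard abduction-action-prediction recipe on a toy linear SCM. The structural equations are invented for illustration; the paper itself works with image generative models.

```python
# Toy counterfactual via abduction, action, prediction on a linear SCM.
import numpy as np

rng = np.random.default_rng(0)

def forward(u1, u2, u3, do_x1=None):
    """Structural equations: X1 := U1, X2 := 2*X1 + U2, Y := X1 + X2 + U3."""
    x1 = u1 if do_x1 is None else do_x1   # optional intervention do(X1 = v)
    x2 = 2 * x1 + u2
    y = x1 + x2 + u3
    return x1, x2, y

# Factual observation generated by (unobserved) exogenous noise.
u1, u2, u3 = rng.normal(size=3)
x1, x2, y = forward(u1, u2, u3)

# Abduction: invert the equations to recover the noise behind the observation.
e1, e2, e3 = x1, x2 - 2 * x1, y - x1 - x2

# Action + prediction: replay the same noise under the intervention do(X1 = x1 + 1).
_, _, y_cf = forward(e1, e2, e3, do_x1=x1 + 1.0)
print(f"factual y = {y:.3f}  counterfactual y = {y_cf:.3f}")
```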
Recommended citation: Parafita, Á., & Vitrià, J. (2019, October). Explaining visual models by causal attribution. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) (pp. 4167-4175). IEEE.
Download Paper
Published in 23rd International Conference of the Catalan Association for Artificial Intelligence (CCIA 2021), 2021
Causal Estimation is usually tackled as a two-step process: identification, to transform a causal query into a statistical estimand, and modelling, to compute this estimand by using data. This reliance on the derived statistical estimand makes these methods ad hoc, used to answer one and only one query. We present an alternative framework called Deep Causal Graphs: with a single model, it answers any identifiable causal query without compromising on performance, thanks to the use of Normalizing Causal Flows, and outputs complex counterfactual distributions instead of single-point estimations of their expected value. We conclude with applications of the framework to Machine Learning Explainability and Fairness.
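The sketch below illustrates the kind of building block such a framework relies on: a graph node modelled as an invertible transform of exogenous noise, conditioned on its parents. The affine form and all names are assumptions for illustration, not the paper's Normalizing Causal Flows implementation.

```python
import torch
import torch.nn as nn

class AffineFlowNode(nn.Module):
    """One graph node X := mu(pa) + exp(s(pa)) * U, invertible in the noise U."""
    def __init__(self, n_parents, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_parents, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))

    def _params(self, parents):
        mu, log_s = self.net(parents).chunk(2, dim=-1)
        return mu, log_s

    def sample(self, parents, u):    # forward pass: noise -> observed value
        mu, log_s = self._params(parents)
        return mu + log_s.exp() * u

    def abduct(self, parents, x):    # inverse pass: observed value -> noise
        mu, log_s = self._params(parents)
        return (x - mu) * (-log_s).exp()
```

Sampling such nodes in topological order answers interventional queries; abducting the noise from a factual sample and replaying it under an intervention yields counterfactuals.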
Recommended citation: Parafita, Á., & Vitrià, J. (2021, October). Deep Causal Graphs for Causal Inference, Black-Box Explainability and Fairness. In Artificial Intelligence Research and Development: Proceedings of the 23rd International Conference of the Catalan Association for Artificial Intelligence (Vol. 339, p. 415). IOS Press.
Download Paper
Published in IEEE Access, 2022
Causal Queries are usually estimated by means of an estimand, a formula consisting of observational terms that can be computed using passive data. Each query results in a different formula, which makes estimand-based methods extremely ad hoc. In this work, we propose an estimand-agnostic framework capable of computing any identifiable causal query on an arbitrary Causal Graph (even in the presence of latent confounders) with only one general model. We provide multiple implementations of this general framework that leverage the expressive power of Neural Networks and Normalizing Flows to model complex distributions, and we derive estimation procedures for all kinds of observational, interventional and counterfactual queries, valid for any graph for which the query is identifiable. Finally, we test our techniques in a modelling setting and an estimation benchmark to show that, despite being query-agnostic, our framework can compete with query-specific models. Our proposal includes an open-source library that allows easy application and extension of our techniques for researchers and practitioners alike.
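To make the estimand-agnostic idea concrete, the following sketch shows how an interventional expectation reduces to Monte Carlo sampling once a full causal model is learned. Here `scm.sample` is a hypothetical interface standing in for such a model, not the actual API of the paper's open-source library.

```python
# Any interventional expectation via ancestral sampling from the intervened model.
def do_expectation(scm, target, interventions, n_samples=10_000):
    # Sample the graph in topological order with the intervened nodes clamped.
    samples = scm.sample(n_samples, interventions=interventions)
    return samples[target].mean()

# Average treatment effect of X on Y, with no query-specific estimand:
# ate = do_expectation(scm, "Y", {"X": 1}) - do_expectation(scm, "Y", {"X": 0})
```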
Recommended citation: Parafita, Á., & Vitrià, J. (2022). Estimand-agnostic causal query estimation with deep causal graphs. IEEE Access, 10, 71370-71386.
Download Paper
Published in Workshop on Explainable Artificial Intelligence at IJCAI 2025, 2025
Understanding how information propagates through Transformer models is a key challenge for interpretability. In this work, we study the effects of minimal token perturbations on the embedding space. In our experiments, we analyze how token frequency relates to the size of the resulting shifts, highlighting that rare tokens usually lead to larger shifts. Moreover, we study how perturbations propagate across layers, demonstrating that input information is increasingly intermixed in deeper layers. Our findings validate the common assumption that the first layers of a model can be used as proxies for model explanations. Overall, this work introduces the combination of token perturbations and embedding-space shifts as a powerful tool for model interpretability.
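A minimal sketch of the measurement this abstract describes, assuming a HuggingFace GPT-2 backbone and an L2 shift metric (both illustrative choices, not necessarily the paper's exact setup):

```python
# Perturb one input token and track how far hidden states move at each layer.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

def hidden_states(text):
    with torch.no_grad():
        return model(**tok(text, return_tensors="pt")).hidden_states

base = hidden_states("The cat sat on the mat")   # factual input
pert = hidden_states("The dog sat on the mat")   # minimal one-token change

for layer, (h0, h1) in enumerate(zip(base, pert)):
    shift = (h0 - h1).norm(dim=-1).mean().item()  # mean per-token L2 shift
    print(f"layer {layer:2d}: mean shift = {shift:.3f}")
```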
Recommended citation: Conti, E., Astruc, A., Parafita, A., & Brando, A. (2025). Probing the Embedding Space of Transformers via Minimal Token Perturbations. In Workshop on Explainable Artificial Intelligence at IJCAI 2025. arXiv preprint arXiv:2506.18011.
Download Paper
Published in The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), 2025
Among explainability techniques, SHAP stands out as one of the most popular, but often overlooks the causal structure of the problem. In response, do-SHAP employs interventional queries, but its reliance on estimands hinders its practical application. To address this problem, we propose the use of estimand-agnostic approaches, which allow for the estimation of any identifiable query from a single model, making do-SHAP feasible on complex graphs. We also develop a novel algorithm to significantly accelerate its computation at a negligible cost, as well as a method to explain inaccessible Data Generating Processes. We demonstrate the estimation and computational performance of our approach, and validate it on two real-world datasets, highlighting its potential in obtaining reliable explanations.
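For intuition, the sketch below computes Shapley values by permutation sampling with an interventional value function v(S) = E[Y | do(X_S = x_S)]. Here `do_expectation` is a hypothetical callable standing in for an estimand-agnostic estimator, and the paper's accelerated algorithm goes well beyond this naive loop.

```python
# Permutation-sampling Shapley values with an interventional value function.
import random

def do_shap(features, x, do_expectation, n_perms=200, seed=0):
    rng = random.Random(seed)
    phi = {f: 0.0 for f in features}
    for _ in range(n_perms):
        order = features[:]
        rng.shuffle(order)
        coalition, v_prev = {}, do_expectation({})   # v(empty set) = E[Y]
        for f in order:
            coalition[f] = x[f]                      # clamp do(X_f = x_f)
            v_curr = do_expectation(dict(coalition))
            phi[f] += (v_curr - v_prev) / n_perms    # marginal contribution
            v_prev = v_curr
    return phi
```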
Recommended citation: Parafita, Á., Garriga, T., Brando, A., & Cazorla, F. J. (2025). Practical do-Shapley Explanations with Estimand-Agnostic Causal Inference. In The Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025).
Download Paper | Download Slides
Published in Northern Lights Deep Learning Conference (NLDL 2026), 2026
Assessing the importance of individual features in Machine Learning is critical to understand the model’s decision-making process. While numerous methods exist, the lack of a definitive ground truth for comparison highlights the need for alternative, well-founded measures. This paper introduces a novel post-hoc local feature importance method called Counterfactual Importance Distribution (CID). We generate two sets of positive and negative counterfactuals, model their distributions using Kernel Density Estimation, and rank features based on a distributional dissimilarity measure. This measure, grounded in a rigorous mathematical framework, satisfies key properties required to function as a valid metric. We showcase the effectiveness of our method by comparing it with well-established local feature importance explainers. Our method not only offers complementary perspectives to existing approaches, but also improves performance on faithfulness metrics (both for comprehensiveness and sufficiency), resulting in more faithful explanations of the system. These results highlight its potential as a valuable tool for model analysis.
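The sketch below mirrors the shape of this pipeline, using an L1 gap between KDE densities as a stand-in for the paper's dissimilarity measure (the actual measure is what carries the metric guarantees):

```python
# Per feature: fit KDEs to the positive vs. negative counterfactual sets
# and rank features by the gap between the two densities.
import numpy as np
from scipy.stats import gaussian_kde

def feature_score(pos_vals, neg_vals, grid_size=512):
    lo = min(pos_vals.min(), neg_vals.min())
    hi = max(pos_vals.max(), neg_vals.max())
    grid = np.linspace(lo, hi, grid_size)
    p, q = gaussian_kde(pos_vals)(grid), gaussian_kde(neg_vals)(grid)
    return np.abs(p - q).sum() * (grid[1] - grid[0])   # approximate L1 gap

def rank_features(pos_cf, neg_cf):
    """Counterfactual sets of shape [n_samples, n_features] -> feature ranking."""
    scores = [feature_score(pos_cf[:, j], neg_cf[:, j])
              for j in range(pos_cf.shape[1])]
    return np.argsort(scores)[::-1]   # most important first
```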
Recommended citation: Conti, E., Parafita, Á., & Brando, A. (2026). CID: Measuring Feature Importance Through Counterfactual Distributions. In Northern Lights Deep Learning Conference (NLDL 2026).
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.