Current research in explainable AI increasingly focuses on integrating interpretability with practical applications across diverse domains. Recent work develops frameworks that combine LLM reasoning with collaborative filtering to improve recommendation, while also addressing the need for efficient and interpretable explanations of complex models. Techniques such as counterfactual training are being explored to improve the plausibility and actionability of model explanations, making them more relevant to real-world decision-making. The introduction of agentic personas in knowledge graph-based explanations aims to tailor insights to specific user needs, improving the adaptability of AI systems in high-stakes settings such as drug discovery. The field is also scrutinizing the reliability of multimodal explanations in face recognition, revealing failure modes that call for more robust evaluation methods. Overall, the shift toward integrating interpretability with domain-specific requirements signals a maturing field that aims to build AI systems that are not only effective but also trustworthy and transparent.
We introduce ECSEL, an explainable classification method that learns formal expressions in the form of signomial equations, motivated by the observation that many symbolic regression benchmarks admit ...
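For orientation, a signomial is a sum of monomial terms with real coefficients and real (possibly negative or non-integer) exponents, and a signomial-equation classifier can be read as thresholding such an expression. A minimal sketch of the general form (the exact formulation learned by ECSEL may differ):

$$f(x) = \sum_{k=1}^{K} c_k \prod_{i=1}^{n} x_i^{a_{k,i}}, \qquad c_k \in \mathbb{R},\; a_{k,i} \in \mathbb{R}, \qquad \hat{y} = \mathbb{1}\!\left[f(x) \ge 0\right].$$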
Large Language Models (LLMs) exhibit potential for explainable recommendation systems but overlook collaborative signals, while prevailing methods treat recommendation and explanation as separate task...
AI explanation methods often assume a static user model, producing non-adaptive explanations regardless of expert goals, reasoning strategies, or decision contexts. Knowledge graph-based explanations,...
Gradient-based saliency methods such as Vanilla Gradient (VG) and Integrated Gradients (IG) are widely used to explain image classifiers, yet the resulting maps are often noisy and unstable, limiting ...
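For context, the two methods named above are simple to state: VG is the raw input gradient of the class score, and IG averages gradients along a straight path from a baseline to the input, scaled by the input-baseline difference. A minimal PyTorch sketch under those standard definitions (`model`, `image`, and `baseline` are illustrative placeholders, not code from the paper):

```python
import torch

def vanilla_gradient(model, image, target_class):
    """Vanilla Gradient: saliency = d(class score)/d(input), evaluated at the input."""
    image = image.detach().clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.detach()  # often visualized as its absolute value

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Integrated Gradients: average gradients along the straight path from a
    baseline (here a black image) to the input, scaled by (input - baseline)."""
    if baseline is None:
        baseline = torch.zeros_like(image)
    total_grad = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (image - baseline)).detach().requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target_class]
        total_grad += torch.autograd.grad(score, point)[0]
    return (image - baseline) * total_grad / steps  # Riemann approximation of the path integral
```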
Concept Bottleneck Models (CBMs) ground predictions in human-understandable concepts but face fundamental limitations: the absence of a metric to pre-evaluate concept relevance, the "linearity problem...
Concept Bottleneck Models (CBMs) improve the explainability of black-box Deep Learning (DL) by introducing intermediate semantic concepts. However, standard CBMs often overlook domain-specific relatio...
Multimodal Large Language Models (MLLMs) have recently been proposed as a means to generate natural-language explanations for face recognition decisions. While such explanations facilitate human inter...
Concept Bottleneck Models (CBMs) aim for ante-hoc interpretability by learning a bottleneck layer that predicts interpretable concepts before the decision. State-of-the-art approaches typically select...
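The CBM abstracts above all build on the same ante-hoc architecture: a feature extractor feeds a bottleneck that predicts human-interpretable concepts, and the final decision is computed from those concepts alone. A minimal generic sketch of that architecture (layer sizes, sigmoid concept activations, and the backbone are illustrative assumptions, not any specific paper's model):

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, backbone, feature_dim, num_concepts, num_classes):
        super().__init__()
        self.backbone = backbone                            # any feature extractor
        self.concept_head = nn.Linear(feature_dim, num_concepts)
        self.label_head = nn.Linear(num_concepts, num_classes)

    def forward(self, x):
        features = self.backbone(x)
        concept_logits = self.concept_head(features)        # interpretable bottleneck
        concepts = torch.sigmoid(concept_logits)            # concept probabilities
        class_logits = self.label_head(concepts)            # decision uses concepts only
        return concept_logits, class_logits
```

Training typically combines a concept loss (e.g. binary cross-entropy against concept annotations) with the usual label loss, which is what allows the prediction to be explained in terms of the intermediate concepts.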
Explainable Artificial Intelligence (XAI) is increasingly essential as AI systems are deployed in critical fields such as healthcare and finance, offering transparency into AI-driven decisions. Two ma...
We propose a novel training regime termed counterfactual training that leverages counterfactual explanations to increase the explanatory capacity of models. Counterfactual explanations have emerged as...
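To make the ingredient concrete, a counterfactual explanation for an input x is a nearby x' that the model assigns to a different (target) class; a common way to find one is gradient-based search that trades off prediction change against proximity. The sketch below shows that generic search only, not the paper's counterfactual-training regime (`model`, `x`, and the L1 proximity penalty are illustrative assumptions):

```python
import torch

def find_counterfactual(model, x, target_class, lam=0.1, steps=200, lr=0.05):
    """Search for an x' close to x (L1 distance) that the model maps to target_class."""
    x = x.detach()
    x_cf = x.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf.unsqueeze(0))                       # shape [1, num_classes]
        loss = (torch.nn.functional.cross_entropy(logits, target)   # push toward target class
                + lam * (x_cf - x).abs().sum())                     # stay close to the original input
        loss.backward()
        optimizer.step()
    return x_cf.detach()
```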