Explaining with examples is an intuitive way to justify AI decisions. However, it is difficult to understand how a decision value should change relative to examples whose many features differ by large amounts. We draw on the practice of real estate valuation, which uses Comparables—examples with known values for comparison. Valuers make estimates more accurate by hypothetically adjusting the attributes of each Comparable and correspondingly adjusting its value by known factors. We propose Comparables XAI for relatable example-based explanations of AI, with Trace adjustments that trace counterfactual changes from each Comparable to the Subject, one attribute at a time, monotonically along the AI feature space. In modelling and user studies, Trace-adjusted Comparables achieved higher XAI faithfulness and precision, higher user accuracy, and narrower uncertainty bounds than linear regression, linearly adjusted Comparables, or unadjusted Comparables. This work contributes a new analytical basis for using example-based explanations to improve user understanding of AI decisions.
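The Trace-adjustment idea described above can be sketched as a path of single-attribute counterfactual steps. The toy model, feature names, and ordering below are illustrative assumptions, not the authors' implementation; in particular, how the monotonic path through the AI feature space is chosen is part of the proposed method and is not modelled here.

```python
# Sketch of a Trace adjustment: starting from a Comparable, change one
# attribute at a time to match the Subject, re-querying the model after
# each step so each attribute change gets its own value adjustment.

def trace_adjust(model, comparable, subject, feature_order):
    """Return the final value and per-feature adjustments along the path."""
    current = dict(comparable)
    prev_value = model(current)
    adjustments = {}
    for feat in feature_order:
        current[feat] = subject[feat]   # one counterfactual step
        new_value = model(current)
        adjustments[feat] = new_value - prev_value
        prev_value = new_value
    return prev_value, adjustments      # final value equals model(subject)

# Toy linear "AI model" of house price (assumed for illustration only)
model = lambda x: 50_000 + 300 * x["area"] + 10_000 * x["bedrooms"]
comp = {"area": 100, "bedrooms": 2}   # Comparable with known attributes
subj = {"area": 120, "bedrooms": 3}   # Subject being valued
final, adj = trace_adjust(model, comp, subj, ["area", "bedrooms"])
# final == model(subj); adj attributes the total change to each feature
```

By construction, the per-feature adjustments sum exactly to the difference between the model's predictions for the Subject and the Comparable, which is what lets a user reason from a known example's value to the AI's output.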
Speaker Info:
Yifan Zhang is a Ph.D. student at SoC, National University of Singapore, working in the UbiComp Lab under the supervision of Prof. Brian Y. Lim. Her research interests include Explainable Artificial Intelligence (XAI) and Human-Computer Interaction (HCI). Specifically, she focuses on designing and developing explainable AI techniques to improve the transparency of complex AI models' decision-making processes. Guided by human-centered principles, her work aims to provide cognitively accessible explanations that enhance overall interpretability.