Summary of "Error Analysis of Shapley Value-Based Model Explanations: An Informative Perspective" by Ningsheng Zhao et al.
Error Analysis of Shapley Value-Based Model Explanations: An Informative Perspective
by Ningsheng Zhao, Jia Yuan Yu, Krzysztof Dzieciolowski, Trang Bui
First submitted to arXiv on: 21 Apr 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Machine Learning (cs.LG); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | Shapley value attribution (SVA) is a popular explainable AI method that measures each feature's contribution to a model's output. However, existing SVA implementations have drawbacks that can lead to biased or unreliable explanations. This paper proposes an error-theoretic analysis framework that decomposes explanation errors into observation bias and structural bias. The authors show that these two biases are intertwined and can make explanations either over-informative or under-informative. They demonstrate how the distributional assumptions behind existing SVA methods drive these errors and propose a measurement tool to quantify distribution drift. Experiments illustrate how different SVA methods end up over- or under-informative in practice. This work sheds light on the errors incurred when estimating SVAs and motivates new, less error-prone approaches. A toy example of what an SVA computes is sketched after the table. |
| Low | GrooveSquid.com (original content) | Shapley value attribution is an AI method that explains how much each feature contributes to a model's output. But the existing ways of computing it have problems, so the explanations are not always reliable. This paper looks at why these problems happen and proposes a way to measure them. The authors show there are two kinds of error: one caused by how the data are observed, and another caused by the assumptions built into the method. They also find that some SVA methods give explanations that say too much or too little. Experiments show how different methods behave. This research helps us understand where the errors come from and how to build better explanations. |
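
For readers unfamiliar with SVAs, here is a minimal, self-contained sketch of what a Shapley value attribution computes for a single prediction. It enumerates feature coalitions by brute force and uses an interventional-style value function that fills in absent features with samples from a background dataset. The model, instance, and background data are illustrative assumptions; this is not the paper's estimators or its error-analysis framework, only a generic illustration of the quantity being estimated.

```python
# Minimal SVA sketch (illustrative, not the paper's method): exact Shapley
# values for one prediction, with absent features marginalized out using a
# background dataset.
from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(model, x, background):
    """Exact Shapley values for one instance `x` (1-D array of d features).

    `model` maps an (n, d) array to an (n,) array of outputs.
    `background` is an (m, d) array used to fill in absent features.
    """
    d = x.shape[0]
    features = list(range(d))

    def value(coalition):
        # v(S): average model output when features in S are fixed to x's
        # values and the remaining features come from the background data.
        samples = background.copy()
        samples[:, list(coalition)] = x[list(coalition)]
        return model(samples).mean()

    phi = np.zeros(d)
    for i in features:
        others = [j for j in features if j != i]
        for size in range(d):
            for subset in combinations(others, size):
                # Shapley weight |S|! (d - |S| - 1)! / d!
                weight = factorial(size) * factorial(d - size - 1) / factorial(d)
                phi[i] += weight * (value(subset + (i,)) - value(subset))
    return phi


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    model = lambda Z: 2 * Z[:, 0] + Z[:, 1] * Z[:, 2]  # toy model
    print(shapley_values(model, X[0], X))
```

Exact enumeration scales exponentially in the number of features, so practical SVA implementations rely on sampling and approximate value functions; the distributional assumptions made in that step are what the paper's error analysis examines.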