


From SHAP Scores to Feature Importance Scores

by Olivier Letoffe, Xuanxiang Huang, Nicholas Asher, Joao Marques-Silva

First submitted to arXiv on: 20 May 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This research paper investigates the relationship between feature attribution in Explainable Artificial Intelligence (XAI) and a priori voting power in cooperative game theory. The study highlights the limitations of existing methods such as SHAP and LIME, which can assign misleading relative importance to features. The authors propose novel desirable properties that feature importance scores (FISs) should exhibit in XAI, and they introduce new FISs that meet these criteria. The paper also conducts a rigorous analysis of the best-known power indices to determine their suitability for XAI. By leveraging game-theoretic foundations and logic-based definitions, this research aims to improve the accuracy and reliability of feature attribution in ML models. (A toy illustration of the underlying Shapley computation appears after these summaries.)

Low Difficulty Summary (original content by GrooveSquid.com)
This study explores how to explain artificial intelligence models better. Current methods such as SHAP and LIME can be misleading. The researchers want new ways to measure which features matter most for a model’s predictions. They propose rules that these measurements should follow and introduce new methods that satisfy them. They also test the new methods against existing ones to see which work best for explaining AI models.

Keywords

  • Artificial intelligence