
Summary of ShapG: New Feature Importance Method Based on the Shapley Value, by Chi Zhao et al.


ShapG: A New Feature Importance Method Based on the Shapley Value

by Chi Zhao, Jing Liu, Elena Parilina

First submitted to arXiv on: 29 Jun 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computer Science and Game Theory (cs.GT); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed Explainable Artificial Intelligence (XAI) method, ShapG, provides model-agnostic global explanations by computing feature importance with Shapley values. ShapG defines an undirected graph over the dataset, where nodes represent features and edges are added based on correlation coefficients between them. This graph structure is then used to approximate Shapley values through sampling, reducing computational complexity. A comparison with other XAI methods on two datasets shows that ShapG provides more accurate explanations while also running faster. Extensive experiments demonstrate its wide applicability for explaining complex models. As a result, ShapG is a useful tool for improving explainability and transparency in AI systems.
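The summary above names ShapG's two ingredients: an undirected feature graph built from correlation coefficients, and a sampling-based approximation of Shapley values. The following is a minimal sketch of those two ideas, not the authors' actual algorithm: the correlation threshold, the toy characteristic function (coalition value = sum of members' absolute correlations with the target), and all names are assumptions for illustration.

```python
import numpy as np

def build_correlation_graph(X, threshold=0.3):
    """Adjacency matrix linking features whose absolute Pearson
    correlation exceeds the threshold (no self-loops)."""
    corr = np.corrcoef(X, rowvar=False)
    return (np.abs(corr) > threshold) & ~np.eye(X.shape[1], dtype=bool)

def sampled_shapley(value_fn, n_features, n_samples=200, seed=None):
    """Approximate Shapley values by averaging each feature's marginal
    contribution over randomly sampled feature permutations."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        perm = rng.permutation(n_features)
        coalition, prev = [], value_fn([])
        for f in perm:
            coalition.append(f)
            cur = value_fn(coalition)
            phi[f] += cur - prev  # marginal contribution of f
            prev = cur
    return phi / n_samples

# Toy data: y depends strongly on feature 0, weakly on feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Graph over the features plus the target (column 4 below).
adj = build_correlation_graph(np.column_stack([X, y]))

# Hypothetical characteristic function for the sketch.
corr_with_y = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(4)])
value_fn = lambda S: float(sum(corr_with_y[j] for j in S))

phi = sampled_shapley(value_fn, n_features=4, n_samples=100, seed=1)
```

Because the toy characteristic function is additive, the sampled estimates coincide with each feature's own correlation term, and feature 0 correctly receives the largest importance. In ShapG itself, the graph restricts which coalitions are sampled, which is what reduces the computational cost.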
Low Difficulty Summary (original content by GrooveSquid.com)
ShapG is a new way to make artificial intelligence (AI) more understandable and transparent. It helps by showing which features are most important for making predictions or decisions. To do this, ShapG creates a special kind of map based on the data, where each node represents a feature and edges connect features that work together. Then, it uses this map to figure out how important each feature is, which takes less time than other methods. Tests show that ShapG does better than other methods at explaining AI models, making it useful for many fields.

Keywords

* Artificial intelligence