Summary of "Choose Your Explanation: A Comparison of SHAP and GradCAM in Human Activity Recognition" by Felix Tempel et al.
Choose Your Explanation: A Comparison of SHAP and GradCAM in Human Activity Recognition
by Felix Tempel, Daniel Groos, Espen Alexander F. Ihlen, Lars Adde, Inga Strümke
First submitted to arXiv on: 20 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | This paper presents a comparative analysis of two widely used explainability methods, Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (GradCAM), for graph convolutional networks (GCNs) in human activity recognition (HAR). The study evaluates these methods on skeleton-based data from two real-world datasets, including a healthcare-critical cerebral palsy (CP) case. The results provide insights into the strengths, limitations, and differences between SHAP and GradCAM, offering a roadmap for selecting the most appropriate explanation method based on specific models and applications. The evaluation focuses on feature importance ranking, interpretability, and model sensitivity through perturbation experiments. While SHAP provides detailed input feature attribution, GradCAM delivers faster, spatially oriented explanations, making both methods complementary depending on the application's requirements. |
| Low | GrooveSquid.com (original content) | This paper helps explain how artificial intelligence (AI) models work so we can understand why they make certain decisions. This is especially important in areas like healthcare, where AI predictions can inform life-or-death decisions. The researchers compared two popular ways of explaining AI model decisions, SHAP and GradCAM, to see which one works best for different types of data and applications. They tested these methods on real-world data from the medical field, including a case involving cerebral palsy patients. The study shows that both SHAP and GradCAM have their strengths and weaknesses, but together they can provide more understandable and useful explanations. |
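To make the comparison concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code): a toy 1D-CNN stands in for the paper's GCN, GradCAM is implemented by hand via hooks, SHAP attributions come from the shap library's gradient-based explainer, and a small occlusion check illustrates the kind of perturbation experiment the summary mentions. All model details, shapes, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import shap  # pip install shap

torch.manual_seed(0)

class TinyHAR(nn.Module):
    """Toy activity classifier over skeleton-like sequences:
    input shape (batch, channels = joints * coords, time)."""
    def __init__(self, in_ch=12, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.features(x)           # (B, 32, T)
        return self.head(h.mean(-1))   # global average pool over time

model = TinyHAR().eval()
x = torch.randn(1, 12, 50)             # one fake skeleton sequence
target = model(x).argmax(1).item()

# --- GradCAM-style saliency: weight the last conv layer's activations by
# --- the time-averaged gradient of the target logit, then apply ReLU.
acts, grads = {}, {}
last_conv = model.features[2]
last_conv.register_forward_hook(lambda m, i, o: acts.update(a=o))
last_conv.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x_g = x.clone().requires_grad_(True)
model(x_g)[0, target].backward()
w = grads["g"].mean(dim=2, keepdim=True)       # (1, 32, 1) channel weights
cam = torch.relu((w * acts["a"]).sum(dim=1))   # (1, T): coarse time saliency

# --- SHAP: per-input-feature attributions via the shap library's
# --- gradient-based explainer against a random background set.
background = torch.randn(16, 12, 50)
explainer = shap.GradientExplainer(model, background)
sv = explainer.shap_values(x)
# Depending on the shap version, this is a list (one array per class)
# or a single array with a trailing class dimension.
attr = sv[target] if isinstance(sv, list) else sv[..., target]
attr = torch.as_tensor(attr).abs().flatten()

# --- Perturbation check: occlude the inputs SHAP ranks highest and see
# --- how much the target logit drops (a simple faithfulness probe).
topk = attr.topk(k=60).indices                 # top 10% of 12*50 features
x_pert = x.clone().flatten()
x_pert[topk] = 0.0
drop = model(x)[0, target] - model(x_pert.view_as(x))[0, target]
print(f"GradCAM map shape: {tuple(cam.shape)}; "
      f"logit drop after occlusion: {drop.item():.3f}")
```

The sketch mirrors the trade-off described above: the GradCAM map is one coarse saliency value per time step, obtained from a single backward pass, while SHAP assigns a value to every input feature at a much higher computational cost.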
Keywords
- Artificial intelligence
- Activity recognition