Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
by Davor Vukadin, Petar Afrić, Marin Šilić, Goran Delač
First submitted to arXiv on: 12 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | This paper introduces a novel method for determining the relevance of input neurons through Layer-wise Relevance Propagation (LRP), addressing shortcomings of current LRP formulations. The proposed approach is applied to the Vision Transformer architecture and evaluated on two image classification datasets, ImageNet and PascalVOC, where it outperforms existing approaches. The paper also discusses the limitations of current evaluation metrics for attribution-based explainability and proposes a new metric that combines faithfulness, robustness, and contrastiveness, which it uses to evaluate a range of attribution-based methods. A minimal code sketch of the underlying LRP idea follows this table. |
| Low | GrooveSquid.com (original content) | This paper helps us better understand why deep-learning models make the decisions they do. Right now, these models are like black boxes: we don't know why they produce certain outputs. To address this, researchers have developed ways to explain what a model is doing; one popular method is called Layer-Wise Relevance Propagation (LRP). However, there is no single agreed-upon way to measure how well these explanation methods work. This paper introduces a new approach that improves on existing LRP methods, shows that it outperforms alternatives on certain image classification tasks, and proposes a new way to score how good an explanation is. |
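The core mechanic behind any LRP variant is a layer-by-layer backward redistribution of a relevance score, from the model's output toward its input neurons. As a concrete illustration, below is a minimal NumPy sketch of the standard LRP-ε rule for a single dense layer. This is the generic rule that the paper's Relative Absolute Magnitude formulation builds on, not the authors' exact method; the function name, shapes, and `eps` default are illustrative assumptions.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Generic LRP-epsilon rule for one dense layer (illustrative sketch).

    a     : (n_in,)        activations entering the layer
    W     : (n_in, n_out)  weight matrix
    b     : (n_out,)       bias
    R_out : (n_out,)       relevance arriving from the layer above
    returns (n_in,)        relevance redistributed onto the layer's inputs
    """
    z = a @ W + b                           # forward pre-activations
    z = np.where(z >= 0, z + eps, z - eps)  # epsilon-stabilize near-zero denominators
    s = R_out / z                           # relevance per unit of pre-activation
    return a * (W @ s)                      # credit each input by its contribution

# Tiny usage example with random weights
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.standard_normal((4, 3))
b = np.zeros(3)
R_out = np.maximum(a @ W + b, 0.0)          # start from the positive output scores
R_in = lrp_epsilon(a, W, b, R_out)
print(R_in.sum(), R_out.sum())              # near-equal: relevance is approximately conserved
```

In a full network such as the Vision Transformer studied in the paper, a rule like this is applied recursively from the classification head back to the input pixels, and the relevance that lands on the inputs forms the attribution map.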
Keywords
- Artificial intelligence
- Deep learning
- Image classification
- Vision transformer