Summary of Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach, by Ruo Yang et al.
Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach
by Ruo Yang, Binghui Wang, Mustafa Bilgic
First submitted to arXiv on: 24 Sep 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes IProp, a novel method for improving the interpretability of deep neural network (DNN) decisions on image classification tasks. Traditional explanation methods assign an attribution score to each pixel independently, whereas humans and DNNs consider relationships between nearby pixels when making predictions. IProp addresses this limitation by modeling each pixel's attribution score as a source of explanatory information and propagating it dynamically across all pixels using a Markov Reward Process, which guarantees convergence to final attribution scores for the pixels of interest (a toy sketch of this propagation idea follows the table). IProp is compatible with existing attribution-based explanation methods, and extensive experiments show it significantly outperforms them on a variety of interpretability metrics. |
Low | GrooveSquid.com (original content) | Imagine trying to understand how a computer sees an image. Most computers today can recognize objects, but it's hard to know why they make certain predictions. Researchers have been working on ways to explain these decisions, but most methods look at each pixel independently, like examining individual building blocks in a puzzle. Humans and computers actually consider the relationships between these building blocks when making decisions. A new method called IProp improves this understanding by considering how information flows from one pixel to another. It's like following a trail of clues to understand why the computer made a certain prediction. |
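
The summary above describes the propagation mechanism only at a high level. As a rough illustration of how attribution scores might spread between neighboring pixels via a Markov Reward Process, here is a minimal NumPy sketch. The 4-connected grid, uniform transition probabilities, discount factor `gamma`, and the helper name `iprop_sketch` are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def iprop_sketch(attr, gamma=0.9, iters=50):
    """Toy propagation of per-pixel attribution scores over a 4-connected
    image grid, in the spirit of a Markov Reward Process: each pixel's
    initial score acts as a local reward, and a Bellman-style value
    iteration spreads that explanatory information to its neighbors
    until the values stop changing.

    NOTE: the grid connectivity, uniform transitions, and discount are
    illustrative assumptions; the paper defines its own MRP components.
    """
    value = np.zeros_like(attr, dtype=float)
    for _ in range(iters):
        # Average the current values of the 4 neighbors (edges clamp).
        up    = np.vstack([value[:1],  value[:-1]])
        down  = np.vstack([value[1:],  value[-1:]])
        left  = np.hstack([value[:, :1], value[:, :-1]])
        right = np.hstack([value[:, 1:], value[:, -1:]])
        neighbor_avg = (up + down + left + right) / 4.0
        # Bellman-style update: local reward plus discounted neighbor value.
        new_value = attr + gamma * neighbor_avg
        if np.max(np.abs(new_value - value)) < 1e-6:
            return new_value
        value = new_value
    return value

# Example: post-process raw attributions from any base explanation method.
raw_attr = np.random.rand(8, 8)        # stand-in for e.g. Grad-CAM scores
final_attr = iprop_sketch(raw_attr)
```

Because the update is a discounted contraction, the iteration converges regardless of the initial scores, which mirrors the convergence guarantee the summary attributes to IProp; everything else here is a simplification.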
Keywords
» Artificial intelligence » Image classification » Neural network