Summary of Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint, by Yong-Min Shin et al.
Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint
by Yong-Min Shin, Siqing Li, Xin Cao, Won-Yong Shin
First submitted to arXiv on: 7 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Information Theory (cs.IT); Neural and Evolutionary Computing (cs.NE); Social and Information Networks (cs.SI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The self-attention mechanism has been integrated into various popular message passing neural networks (MPNNs), allowing the model to adaptively control information flow along graph edges. Attention-based MPNNs (Att-GNNs) have also served as a baseline for multiple studies on explainable AI (XAI), with attention viewed as a natural form of model interpretation. However, existing studies often employ naive calculations to derive attribution scores from attention, undermining the potential of attention as an explanation method for Att-GNNs. This study aims to bridge the gap between the widespread use of attention-based MPNNs and their potential explainability via attention. The authors propose GATT, an edge attribution calculation method based on the computation tree, which reflects the underlying model's computation process. Their empirical results demonstrate the effectiveness of GATT in faithfulness, explanation accuracy, and case studies on both synthetic and real-world benchmark datasets (a small illustrative sketch of the attribution idea follows the table). |
| Low | GrooveSquid.com (original content) | The researchers are working on a new way to understand how AI models make decisions. They're taking an idea that works well for natural language processing and computer vision and applying it to something called message passing neural networks (MPNNs). These MPNNs help machines learn from complex data like social media or sensor readings. The goal is to figure out why the model makes certain decisions, which can be important for things like self-driving cars or medical diagnosis. Right now, scientists are using a simple method that's not very good at explaining what the model did. This study proposes a new way to calculate how much different parts of the data contributed to the decision. They test this approach on some sample datasets and show it works better than the old way. |
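To make the contrast between a naive attention-derived attribution and a computation-tree view concrete, here is a minimal Python sketch. The toy graph, attention values, and function names are illustrative assumptions rather than the paper's actual GATT implementation: the sketch only compares averaging an edge's attention weight across layers with summing products of layer-wise attentions over message paths in the unrolled (computation-tree) view of a two-layer attention-based MPNN.

```python
from itertools import product

# Toy directed graph: an edge (u, v) means node u sends messages to node v.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "c")]  # includes a self-loop on c
edge_set = set(edges)
nodes = {n for e in edges for n in e}

# Hypothetical per-layer attention weights alpha[layer][(src, dst)], as a
# GAT-style attention layer would produce after normalizing over each
# target's incoming edges. The numbers are made up for illustration.
alpha = [
    {("a", "b"): 1.0, ("b", "c"): 0.5, ("a", "c"): 0.3, ("c", "c"): 0.2},
    {("a", "b"): 1.0, ("b", "c"): 0.6, ("a", "c"): 0.1, ("c", "c"): 0.3},
]
num_layers = len(alpha)


def naive_attribution(edge):
    """Naive baseline: average the edge's attention weight over layers."""
    return sum(layer[edge] for layer in alpha) / num_layers


def tree_attribution(edge, target):
    """Computation-tree-style score: sum, over every length-`num_layers`
    message path ending at `target`, the product of layer-wise attentions
    along the path, restricted to paths that traverse `edge`."""
    total = 0.0
    for prefix in product(nodes, repeat=num_layers):
        walk = prefix + (target,)          # v0 -> v1 -> ... -> target
        hops = list(zip(walk, walk[1:]))   # one hop per MPNN layer
        if any(h not in edge_set for h in hops):
            continue                       # not a valid path in the graph
        if edge not in hops:
            continue                       # path never uses the edge
        score = 1.0
        for layer, hop in enumerate(hops):
            score *= alpha[layer][hop]     # layer 0 = first MPNN layer
        total += score
    return total


e = ("a", "b")
print("naive (layer-averaged) attribution:", naive_attribution(e))
print("computation-tree attribution toward node c:", tree_attribution(e, "c"))
```

On this toy example the two scores disagree (1.0 versus 0.6 for the edge a→b): a per-edge average ignores how attention compounds across layers, while a path-based score follows the model's actual message flow through the unrolled computation. GATT's precise formulation is defined on this computation tree in the paper and is not reproduced here.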
Keywords
» Artificial intelligence » Attention » Natural language processing » Self attention