Summary of Explainable Graph Neural Networks Under Fire, by Zhong Li et al.
Explainable Graph Neural Networks Under Fire
by Zhong Li, Simon Geisler, Yuhang Wang, Stephan Günnemann, Matthijs van Leeuwen
First submitted to arXiv on: 10 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the paper's original abstract on its arXiv page. |
Medium | GrooveSquid.com (original content) | The paper examines the limitations of graph neural network (GNN) explanation methods, which aim to make GNN predictions interpretable in decision-critical applications. Most current methods work post-hoc and return an explanation as a subset of important edges or nodes. This study shows that such explanations are highly susceptible to adversarial perturbations: small changes to the input graph that leave the model's predictions unchanged can drastically alter the explanation. This casts doubt on the trustworthiness and practical utility of post-hoc GNN explanation methods. To expose the issue, the authors introduce GXAttack, a novel optimization-based white-box attack designed specifically to perturb post-hoc GNN explanations (a simplified sketch of such an attack follows this table). The paper calls for adversarial evaluation of future GNN explainers to demonstrate their robustness. |
Low | GrooveSquid.com (original content) | The paper looks at whether we can trust the explanations given for graph neural network (GNN) predictions, which are often hard to understand. Most explanation methods point out important parts of the graph to show why a GNN made a certain prediction. But this study shows that these explanations are not reliable: small changes to the original graph can greatly alter the explanation without changing the prediction. This is a problem, because we need trustworthy explanations to make good decisions with GNNs. To study this issue, the authors created a new method called GXAttack that deliberately manipulates post-hoc GNN explanations to test their reliability. |
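To make the idea concrete, here is a minimal, self-contained PyTorch sketch of an explanation attack. It is not the paper's GXAttack implementation: instead of GXAttack's gradient-based optimization, it uses a simple greedy search over edge flips, and every name in it (SimpleGCN, edge_saliency, attack_explanation) as well as the gradient-saliency "explainer" is an illustrative assumption. What it demonstrates is the goal described above: perturb the graph so that the model's prediction stays the same while the post-hoc explanation shifts as much as possible.

```python
# Hypothetical sketch of an attack on GNN explanations; NOT the paper's GXAttack.
import torch
import torch.nn.functional as F


class SimpleGCN(torch.nn.Module):
    """A tiny two-layer GCN operating on a dense adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Symmetrically normalize the adjacency matrix with self-loops.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).clamp(min=1e-6).pow(-0.5)
        a = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = F.relu(self.w1(a @ x))
        return self.w2(a @ h)


def edge_saliency(model, x, adj, node):
    """Gradient of the predicted logit w.r.t. edges: a stand-in post-hoc explainer."""
    adj = adj.clone().requires_grad_(True)
    logits = model(x, adj)
    logits[node, logits[node].argmax()].backward()
    return adj.grad.abs()


def attack_explanation(model, x, adj, node, budget=2):
    """Greedily flip up to `budget` edges that most change the explanation
    for `node` while keeping its predicted class unchanged."""
    pred = model(x, adj).argmax(1)[node]
    base_expl = edge_saliency(model, x, adj, node)
    adv = adj.clone()
    for _ in range(budget):
        best, best_score = None, -1.0
        for i in range(adj.size(0)):
            for j in range(i + 1, adj.size(0)):
                cand = adv.clone()
                flipped = 1.0 - cand[i, j]          # flip one candidate edge
                cand[i, j] = flipped
                cand[j, i] = flipped
                if model(x, cand).argmax(1)[node] != pred:
                    continue                        # prediction must be preserved
                score = (edge_saliency(model, x, cand, node) - base_expl).abs().sum().item()
                if score > best_score:
                    best, best_score = (i, j), score
        if best is None:
            break
        i, j = best
        flipped = 1.0 - adv[i, j]
        adv[i, j] = flipped
        adv[j, i] = flipped
    return adv
```

A version closer in spirit to the paper's optimization-based attack would relax the discrete edge flips to continuous edge weights and optimize them directly by gradient descent; the greedy loop above merely keeps the sketch short and dependency-free.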
Keywords
» Artificial intelligence » GNN » Graph neural network » Optimization