
Summary of View-based Explanations for Graph Neural Networks, by Tingyang Chen et al.


View-based Explanations for Graph Neural Networks

by Tingyang Chen, Dazhuo Qiu, Yinghui Wu, Arijit Khan, Xiangyu Ke, Yunjun Gao

First submitted to arXiv on: 4 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Databases (cs.DB)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper proposes a novel approach to generating explanations for graph neural networks (GNNs) called Graph Views for EXplanation (GVEX). The goal is to understand why a GNN-based classifier assigned a specific class label. The authors design a two-tier explanation structure, comprising graph patterns and induced explanation subgraphs, which concisely describes the fraction of the input graphs that best explains the assigned label. They also develop quality measures and formulate an optimization problem to compute optimal explanation views. The authors show that this problem is Σ^P_2-hard and present two algorithms: one that follows an explain-and-summarize strategy, and another that processes an input node stream in batches in a single pass.
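The two-tier structure described above can be illustrated with a minimal sketch: tier 1 holds a small graph pattern that summarizes the explanation, and tier 2 holds explanation subgraphs induced from the input graphs. This is an illustrative sketch only; the class and function names are our own and do not come from the paper's implementation.

```python
# Hypothetical sketch of a two-tier explanation view, in the spirit of GVEX.
# Graphs are adjacency dicts: node -> set of neighbor nodes.

def induced_subgraph(graph, nodes):
    """Return the subgraph of `graph` induced by `nodes`:
    keep only these nodes and the edges among them."""
    nodes = set(nodes)
    return {u: graph[u] & nodes for u in nodes}

class ExplanationView:
    """Tier 1: a graph pattern summarizing the explanation.
    Tier 2: explanation subgraphs induced from the input graphs."""
    def __init__(self, pattern):
        self.pattern = pattern    # small summary graph (tier 1)
        self.subgraphs = []       # induced explanation subgraphs (tier 2)

    def add_explanation(self, graph, important_nodes):
        # Keep only the fraction of the graph deemed most
        # responsible for the assigned class label.
        self.subgraphs.append(induced_subgraph(graph, important_nodes))

# Toy input graph classified by a GNN; nodes 1-3 explain the label.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
view = ExplanationView(pattern={"A": {"B"}, "B": {"A"}})
view.add_explanation(g, important_nodes=[1, 2, 3])
print(view.subgraphs[0])
```

Here the induced subgraph keeps nodes 1, 2, and 3 and drops the edge to node 4, mirroring how an explanation subgraph retains only the label-relevant portion of the input graph.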
Low Difficulty Summary (GrooveSquid.com, original content)
GVEX helps us understand why graph neural networks make certain predictions by generating explanations for specific class labels. This is important because GNNs are used in many areas like social network analysis, computer vision, and natural language processing. The authors want to improve our understanding of how GNNs work so we can trust their predictions more. They do this by creating a special type of explanation called an “explanation view” that shows which parts of the graph are most important for a particular prediction.

Keywords

* Artificial intelligence  * GNN  * Natural language processing  * Optimization