Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks

by Jiahua Rao, Jiancong Xie, Hanjing Lin, Shuangjia Zheng, Zhen Wang, Yuedong Yang

First submitted to arXiv on: 7 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
Graph Neural Networks (GNNs) have revolutionized topological data processing, but their interpretability remains a significant challenge. Current methods rely on post-hoc explanations, which struggle with complex subgraphs and cannot enhance GNN predictions. To address this limitation, the authors propose an interpretable causal GNN framework that combines retrieval-based causal learning with Graph Information Bottleneck (GIB) theory. Their approach semi-parametrically retrieves the crucial subgraphs detected by GIB and compresses the explanatory subgraphs via a causal module. The framework outperforms state-of-the-art methods, achieving 32.71% higher precision on real-world explanation scenarios with diverse explanation types, and the learned explanations further improve GNN prediction performance.
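
For readers who want the underlying objective: the Graph Information Bottleneck principle mentioned above has a standard formulation in the GIB literature (the expression below is the general GIB objective, not an equation quoted from this paper). It selects an explanatory subgraph G_S of the input graph G that is maximally predictive of the label Y while compressing away the rest of the graph:

    \min_{G_S \subseteq G} \; -I(G_S; Y) + \beta \, I(G_S; G)

Here I(·;·) denotes mutual information and β > 0 balances the two terms: the first rewards subgraphs that predict the label, the second penalizes subgraphs that retain too much of the original graph, so the minimizer is a compact, label-relevant explanation.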

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making Graph Neural Networks (GNNs) easier to understand and better at predicting things. Right now, people explain how GNNs work only after the predictions have been made. These explanations struggle with complicated cases and don't help the GNNs predict any better. The researchers wanted a new approach that ties explaining what a GNN does to improving its predictions. They built one from two techniques: one finds the important parts of the data, and the other simplifies the explanations. Their method beat existing ones, and the explanations it learns actually improve prediction accuracy.

Keywords

* Artificial intelligence
* GNN
* Precision