Summary of How Interpretable Are Interpretable Graph Neural Networks?, by Yongqiang Chen et al.
How Interpretable Are Interpretable Graph Neural Networks?
by Yongqiang Chen, Yatao Bian, Bo Han, James Cheng
First submitted to arXiv on: 12 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper formulates interpretable subgraph learning through the multilinear extension of the subgraph distribution, which the authors call SubMT. They find that existing interpretable graph neural networks (XGNNs) fail to approximate SubMT faithfully, which degrades the interpretability of the subgraphs they extract. To address this, they design a new XGNN architecture, Graph Multilinear neT (GMT), which is provably more powerful at approximating SubMT. Experiments on a range of graph classification benchmarks show that GMT outperforms state-of-the-art models by up to 10% in both interpretability and generalizability. (A toy illustration of the multilinear-extension idea appears after the table.) |
Low | GrooveSquid.com (original content) | This paper helps us understand how computers can learn the important parts of big graphs, like social networks or molecules. We already have ways to do this, but they are not very good at explaining why they picked those parts. The authors describe a way to measure this, called SubMT, and then design a special computer program (called GMT) that is better at finding the important parts. They tested it on lots of different graphs and found that it worked really well! |
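To make the multilinear-extension idea from the medium summary more concrete, here is a minimal, hypothetical Python sketch (not the authors' code). It compares an exact multilinear extension of a toy set function, i.e. its expected value when each graph element is kept independently with some probability, against a simple Monte Carlo estimate of the same expectation. The function `f`, the probabilities `p`, and all helper names are illustrative assumptions, not taken from the paper.

```python
import itertools
import random

def multilinear_extension(f, p):
    """Exact multilinear extension E_{S~p}[f(S)] of a set function f,
    where element i is included in S independently with probability p[i].
    Enumerates all 2^n subsets, so it is only feasible for tiny n."""
    n = len(p)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        prob = 1.0
        for i, b in enumerate(bits):
            prob *= p[i] if b else (1.0 - p[i])
        subset = {i for i, b in enumerate(bits) if b}
        total += prob * f(subset)
    return total

def monte_carlo_extension(f, p, num_samples=1000, seed=0):
    """Monte Carlo estimate of the same quantity: sample random subsets
    (subgraph masks) from the Bernoulli distribution and average f."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(num_samples):
        subset = {i for i, pi in enumerate(p) if rng.random() < pi}
        acc += f(subset)
    return acc / num_samples

if __name__ == "__main__":
    # Toy "classifier score": counts how many of a few important edges survive.
    important = {0, 2}
    f = lambda s: len(s & important)
    p = [0.9, 0.1, 0.8, 0.3]  # hypothetical edge-selection probabilities
    print(multilinear_extension(f, p))   # exact expectation
    print(monte_carlo_extension(f, p))   # sampled approximation
```

In the paper's setting, the set function would be the classifier evaluated on a sampled subgraph, and GMT is designed to approximate this expectation more faithfully than earlier XGNN architectures; the sketch only shows the kind of expectation being approximated, under the stated toy assumptions.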
Keywords
» Artificial intelligence » Classification