Summary of On GNN Explainability with Activation Rules, by Luca Veyrin-Forrer et al.
On GNN explainability with activation rules
by Luca Veyrin-Forrer, Ataollah Kamal, Stefan Duffner, Marc Plantevit, Céline Robardet
First submitted to arXiv on: 17 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | GNNs are powerful models that excel at machine learning tasks on graphs. However, their deployment is hindered by societal concerns about trustworthiness and transparency. To address this, the authors propose mining activation rules in the hidden layers to understand how GNNs perceive the world. The goal is not to discover highly discriminative individual rules but rather a small set of rules that together cover all input graphs. To this end, they introduce the subjective activation pattern domain and define an algorithm that enumerates activation rules in each hidden layer. The approach uses information theory to quantify the interestingness of these rules while accounting for background knowledge about the input graph data. The resulting activation rules can then be redescribed using interpretable features, providing insight into the characteristics the GNN uses to classify graphs. This makes it possible to identify the hidden features built by the GNN across its layers and to explain its decisions (a toy sketch of this rule-mining idea follows the table). Experiments show highly competitive performance, with up to a 200% improvement in fidelity over state-of-the-art (SOTA) methods. |
Low | GrooveSquid.com (original content) | GNNs are powerful models that help computers understand graph-related problems. However, people don't always trust these models because it isn't clear how they work. To make things clearer, the researchers came up with a way to "decode" the inner workings of GNNs: figure out which characteristics the model uses to classify graphs and why it makes certain decisions. To do this, they extract simple rules from the hidden layers of the GNN. These rules can be understood by humans and help explain how the model works. The researchers tested their approach on both synthetic and real-world datasets and found that it explained graph classifications better than other methods. |
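To make the rule-mining idea in the medium summary concrete, here is a minimal, hypothetical Python sketch. It binarizes hidden-layer activations, scores sets of co-activated units by how surprising their joint coverage is under an independence background model, and greedily keeps a small set of positive-score rules until the input graphs are covered. Everything here (function names such as `mine_activation_rules`, the pairwise pattern language, and the scoring formula) is an illustrative assumption rather than the authors' actual method.

```python
# Hypothetical sketch of the activation-rule mining idea summarized above.
# Function names and the scoring formula are illustrative assumptions,
# not the authors' actual algorithm or API.
from itertools import combinations

import numpy as np

def binarize_activations(H, threshold=0.0):
    """Mark a hidden-layer component as 'active' when its value exceeds
    the threshold; H is an (n_graphs x n_units) activation matrix."""
    return (H > threshold).astype(int)

def rule_interest(mask, A, prior):
    """Simplified information-theoretic score of an activation rule
    (a set of jointly active units): how surprising the rule's observed
    coverage is under a background model where units fire independently
    with probabilities `prior` (a stand-in for the paper's subjective
    interestingness measure)."""
    covered = A[:, mask].all(axis=1)    # graphs matching the rule
    coverage = covered.mean()
    expected = prior[mask].prod()       # coverage expected under the prior
    if coverage == 0.0 or expected == 0.0:
        return 0.0
    return coverage * np.log2(coverage / expected)

def mine_activation_rules(H, max_rules=10):
    """Greedily collect a small set of informative rules that together
    cover the input graphs, as the summary describes. For brevity,
    candidate rules are pairs of units; the paper's pattern language
    is richer."""
    A = binarize_activations(H)
    prior = A.mean(axis=0)              # background activation rates
    n_units = A.shape[1]
    uncovered = np.ones(A.shape[0], dtype=bool)
    rules = []
    while uncovered.any() and len(rules) < max_rules:
        best_mask, best_score = None, 0.0   # keep positive-score rules only
        for units in combinations(range(n_units), 2):
            mask = np.zeros(n_units, dtype=bool)
            mask[list(units)] = True
            score = rule_interest(mask, A[uncovered], prior)
            if score > best_score:
                best_mask, best_score = mask, score
        if best_mask is None:
            break                           # nothing informative left
        rules.append(best_mask)
        uncovered &= ~A[:, best_mask].all(axis=1)
    return rules

# Toy run: 8 graphs, 5 hidden units; two units co-fire on the first 4 graphs.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 5))
H[:4, :2] = 2.0
for mask in mine_activation_rules(H):
    print("rule over hidden units:", np.flatnonzero(mask))
```

A full implementation would also redescribe each mined rule with interpretable graph features, a step this sketch omits.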
Keywords
* Artificial intelligence * Classification * GNN * Machine learning