
Summary of A Differential Geometric View and Explainability of GNN on Evolving Graphs, by Yazheng Liu et al.


A Differential Geometric View and Explainability of GNN on Evolving Graphs

by Yazheng Liu, Xi Zhang, Sihong Xie

First submitted to arXiv on: 11 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on its arXiv page.
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a new approach to understanding how Graph Neural Networks (GNNs) respond to evolving graphs. The authors use axiomatic attribution to build a smooth parameterization of the GNN-predicted distributions, so that the evolution of those distributions can be modeled as smooth curves on a low-dimensional manifold within a high-dimensional embedding space. The explanation problem is formulated to be convex and is optimized for interpretability, and the method outperforms state-of-the-art baselines on node classification, link prediction, and graph classification tasks. (A hypothetical code sketch of this general recipe follows the summaries below.)
Low Difficulty Summary (original content by GrooveSquid.com)
Imagine you're trying to understand how computers learn from changing social networks or biochemical pathways. This paper helps explain what a machine learning model predicts, and why its predictions change, as these networks evolve over time. The authors create a new way to analyze a model's outputs that accounts for how the network is changing. The method is designed to be easy to interpret, which matters when people make decisions based on the predictions.
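The medium difficulty summary mentions axiomatic attribution and a convex, interpretability-oriented optimization for explaining prediction change on evolving graphs. The sketch below is a minimal, hypothetical illustration of that general recipe under assumed details, not the authors' method: a toy one-layer GCN stands in for the trained model, an integrated-gradients-style path attribution over the interpolation between two graph snapshots stands in for the axiomatic attribution, and a simple top-k selection of edge edits stands in for the convex optimization. Every name in it (gcn_predict, the random graphs, the weights W) is assumed for illustration.

```python
# Hypothetical sketch, NOT the authors' implementation: attribute the change in a
# GNN's predicted distribution between two graph snapshots to individual edge
# edits via a path-based (integrated-gradients-style) attribution, then keep a
# small subset of edits as a sparse explanation.
import torch

torch.manual_seed(0)
n, d, c = 6, 4, 3                        # nodes, feature dim, classes
X = torch.randn(n, d)                    # node features (fixed across snapshots)
W = torch.randn(d, c)                    # weights of the toy GCN, assumed trained

# Old snapshot: a random undirected graph without self-loops.
A_old = torch.triu((torch.rand(n, n) < 0.4).float(), diagonal=1)
A_old = A_old + A_old.T
# New snapshot: the same graph with two edges flipped (added or removed).
A_new = A_old.clone()
for i, j in [(0, 3), (1, 4)]:
    A_new[i, j] = A_new[j, i] = 1.0 - A_new[i, j]

def gcn_predict(A, target=0):
    """Toy one-layer GCN: mean aggregation with self-loops, then softmax."""
    A_hat = A + torch.eye(n)
    H = (A_hat / A_hat.sum(dim=1, keepdim=True)) @ X @ W
    return torch.softmax(H[target], dim=-1)

delta = A_new - A_old                        # the edge edits to be explained
cls = gcn_predict(A_new).argmax().item()     # class predicted on the new snapshot

# Path attribution: average the gradient along the straight line between the
# two snapshots, then weight by the edit (integrated-gradients formula).
steps, grad_sum = 32, torch.zeros(n, n)
for k in range(1, steps + 1):
    A_alpha = (A_old + (k / steps) * delta).requires_grad_(True)
    gcn_predict(A_alpha)[cls].backward()
    grad_sum += A_alpha.grad
attribution = delta * grad_sum / steps

# Sparse explanation: keep the most influential edits (each undirected edit
# appears twice, once per direction).
edits = delta.nonzero()
scores = attribution[edits[:, 0], edits[:, 1]]
top = scores.abs().topk(min(2, len(scores))).indices
print("edits ranked as explanation:", [tuple(edits[i].tolist()) for i in top])
```

Running it prints the edge edits ranked by how strongly they account for the change in the predicted distribution at the target node; the top-k step is only a stand-in for the sparse, convex selection described in the summary.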

Keywords

  • Artificial intelligence
  • Classification
  • Embedding space
  • GNN
  • Machine learning