Summary of SF-GNN: Self Filter for Message Lossless Propagation in Deep Graph Neural Network, by Yushan Zhu et al.


SF-GNN: Self Filter for Message Lossless Propagation in Deep Graph Neural Network

by Yushan Zhu, Wen Zhang, Yajing Xu, Zhen Yao, Mingyang Chen, Huajun Chen

First submitted to arXiv on: 3 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (written by GrooveSquid.com; original content)
A novel Graph Neural Network (GNN) approach is introduced to address the performance degradation issue in deep GNNs. Unlike traditional explanations, this paper proposes that interference from low-quality node representations during message propagation is the root cause of the problem. To tackle this, a simple and general method called SF-GNN is presented. SF-GNN defines two representations for each node: one for the node's own feature and another for the message it propagates to neighbor nodes. A self-filter module evaluates the quality of the message representation and, based on that assessment, decides whether to include it in message propagation. Experiments on homogeneous graphs, heterogeneous graphs, and knowledge graphs show that applying SF-GNN to various GNN models achieves state-of-the-art performance.
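To make the idea concrete, here is a minimal, hypothetical sketch of a single message-passing layer with a self-filter gate, written in plain NumPy. The paper's actual module design is not specified in this summary, so the gate here (a linear scorer followed by a sigmoid), the mean aggregation, and all parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SelfFilterLayer:
    """Hypothetical sketch of the SF-GNN idea: each node keeps two
    representations -- h (its own feature) and m (the message it sends
    to neighbors). A gate scores the quality of m, and low-scoring
    messages are (softly) filtered out of propagation."""

    def __init__(self, dim):
        # Assumed gate: linear projection to a scalar, then sigmoid.
        self.w_gate = rng.normal(scale=1.0 / np.sqrt(dim), size=(dim,))
        self.b_gate = 0.0

    def forward(self, h, m, adj):
        # Quality score in [0, 1] for each node's message representation.
        gate = sigmoid(m @ self.w_gate + self.b_gate)      # shape (n,)
        filtered = m * gate[:, None]                       # gated messages
        # Mean-aggregate the gated messages over each node's neighbors.
        deg = adj.sum(axis=1, keepdims=True).clip(min=1)
        agg = (adj @ filtered) / deg
        # The node's own feature representation is kept intact;
        # only the message representation is updated for the next layer.
        h_new = h
        m_new = np.tanh(agg + h)
        return h_new, m_new

# Toy example: a 4-node path graph.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
h = rng.normal(size=(4, 8))
layer = SelfFilterLayer(8)
h_out, m_out = layer.forward(h, h.copy(), adj)
print(h_out.shape, m_out.shape)
```

Because the feature representation and the message representation are separate, a node whose message is judged low-quality can still retain its own feature unchanged, which is the mechanism the summary credits with avoiding lossy propagation in deep stacks.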
Low Difficulty Summary (written by GrooveSquid.com; original content)
A new idea in Graph Neural Networks (GNNs) is presented that helps deep GNNs work better. Right now, stacking more GNN layers can actually make things worse instead of improving them. The problem is that low-quality node representations get mixed in during message propagation, making the model perform poorly. To fix this, a simple new method called SF-GNN is introduced. It gives each node two representations: one for the node itself and another for passing messages to neighbor nodes. A special module checks how good each message representation is and decides whether to use it or not. This helps deep GNNs work better on different types of graphs, including homogeneous graphs, heterogeneous graphs, and knowledge graphs.

Keywords

  • Artificial intelligence
  • GNN
  • Graph neural network