

Enhancing Size Generalization in Graph Neural Networks through Disentangled Representation Learning

by Zheng Huang, Qihui Yang, Dawei Zhou, Yujun Yan

First submitted to arXiv on: 7 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel framework, DISGEN, is proposed to enhance the size generalizability of graph neural networks (GNNs). The framework disentangles size factors from graph representations using size- and task-invariant augmentations together with a decoupling loss that minimizes the information shared between the size-related and task-related hidden representations, with theoretical guarantees for its effectiveness. Empirical results show that DISGEN outperforms state-of-the-art models by up to 6% on real-world datasets.
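To make the mechanism concrete, here is a minimal PyTorch-style sketch of the idea. This is not the authors’ code: the function names, the split of the graph embedding into task and size blocks, and the cross-correlation penalty used as a stand-in for “shared information” are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def decoupling_loss(h_task: torch.Tensor, h_size: torch.Tensor) -> torch.Tensor:
    """Penalize information shared between task and size embedding blocks.

    Shared information is approximated here by the cross-correlation
    between the two blocks; the paper's decoupling loss (which comes with
    theoretical guarantees) is not reproduced by this simple proxy.
    """
    # Center and scale each dimension across the batch.
    h_task = (h_task - h_task.mean(0)) / (h_task.std(0) + 1e-6)
    h_size = (h_size - h_size.mean(0)) / (h_size.std(0) + 1e-6)
    # Cross-correlation between every task dim and every size dim;
    # driving these entries to zero decorrelates the two blocks.
    c = h_task.T @ h_size / h_task.shape[0]
    return c.pow(2).mean()


def disgen_style_loss(h, h_aug, labels, classifier, split: int, lam: float = 1.0):
    """Combine task loss, augmentation consistency, and decoupling.

    `h` and `h_aug` are GNN embeddings of an original graph and of a
    size-varied, task-preserving augmentation of it; the first `split`
    dimensions are treated as the task-related block.
    """
    h_task, h_size = h[:, :split], h[:, split:]
    h_task_aug = h_aug[:, :split]
    task = F.cross_entropy(classifier(h_task), labels)
    # The augmentation changes graph size but not task semantics, so the
    # task-related block should be invariant to it.
    consistency = F.mse_loss(h_task, h_task_aug)
    return task + consistency + lam * decoupling_loss(h_task, h_size)
```

In this sketch, only the task-related block is fed to the classifier, so size information that leaks into predictions is actively penalized rather than merely ignored.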
Low Difficulty Summary (written by GrooveSquid.com, original content)
Graph neural networks (GNNs) are great at learning from small graphs, but they often struggle when faced with larger ones. Existing methods don’t do a good job of removing size information from graph representations, which makes them less effective and reliant on other models. A new approach called DISGEN helps fix this problem by separating size factors from graph representations. It creates altered versions of each training graph that change its size but not its task, and it introduces a loss function that keeps size information out of the part of the representation used for prediction. As a result, GNNs trained with DISGEN perform better on larger graphs than they do without it.

Keywords

  • Artificial intelligence
  • Loss function