


Simple Graph Condensation

by Zhenbang Xiao, Yu Wang, Shunyu Liu, Huiqiong Wang, Mingli Song, Tongya Zheng

First submitted to arXiv on: 22 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Social and Information Networks (cs.SI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.
Medium Difficulty Summary (written by GrooveSquid.com, original content)

This paper proposes a simplified approach to graph condensation, a technique that lets Graph Neural Networks (GNNs) be trained on small condensed graphs as stand-ins for large-scale original graphs. Traditional methods focus on aligning complex metrics, such as gradients and output distributions, between the condensed and original graphs, but these intricate external parameters can disrupt optimization and destabilize the condensation process. To address this, the authors introduce the Simple Graph Condensation (SimGC) framework, which aligns the condensed graph with the original graph from the input layer through to the prediction layer, guided by a Simple Graph Convolution (SGC) model pre-trained on the original graph. This straightforward yet effective strategy achieves a speedup of up to 10 times over existing graph condensation methods while performing on par with state-of-the-art baselines.
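
To make the layer-to-layer alignment idea concrete, below is a minimal sketch in PyTorch of what such a condensation step could look like. This is not the authors' released implementation: the helper names (normalize_adj, sgc_layers, simgc_step), the mean-statistics matching loss, and the frozen-weight setup are illustrative assumptions, and the paper's actual alignment objective may differ.

```python
# Minimal, illustrative sketch of SimGC-style layer-wise alignment
# (assumptions, not the authors' code): a frozen SGC model pre-trained
# on the original graph guides learnable condensed features/adjacency.
import torch
import torch.nn.functional as F

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2, as used by SGC."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

def sgc_layers(x: torch.Tensor, adj_norm: torch.Tensor, k: int):
    """Return the K propagated feature matrices S X, S^2 X, ..., S^K X."""
    outs = []
    for _ in range(k):
        x = adj_norm @ x
        outs.append(x)
    return outs

def simgc_step(x_orig, adj_orig, x_cond, adj_cond, w_frozen, k=2):
    """One hypothetical alignment loss from input layer to prediction layer.

    x_cond and adj_cond are the learnable condensed graph; w_frozen is the
    classifier weight of an SGC model pre-trained on the original graph.
    """
    h_orig = sgc_layers(x_orig, normalize_adj(adj_orig), k)
    h_cond = sgc_layers(x_cond, normalize_adj(adj_cond), k)
    # Input-layer alignment: match first-order feature statistics
    # (one plausible choice of alignment metric).
    loss = F.mse_loss(x_cond.mean(dim=0), x_orig.mean(dim=0))
    # Hidden-layer alignment at every propagation step.
    for ho, hc in zip(h_orig, h_cond):
        loss = loss + F.mse_loss(hc.mean(dim=0), ho.mean(dim=0))
    # Prediction-layer alignment through the frozen SGC classifier.
    loss = loss + F.mse_loss((h_cond[-1] @ w_frozen).mean(dim=0),
                             (h_orig[-1] @ w_frozen).mean(dim=0))
    return loss
```

In such a setup, the loss would be minimized with respect to the condensed features and adjacency while the SGC parameters stay fixed, which plausibly explains the stability and speed gains over gradient-matching approaches that require nested optimization.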
Low Difficulty Summary (written by GrooveSquid.com, original content)

This paper simplifies training Graph Neural Networks (GNNs) on big graphs by using a smaller, condensed version of the graph instead. Current methods try to make the condensed version behave like the original by matching things like gradients and output distributions, but this can be tricky and make the condensation process unstable. The authors introduce a simpler, more stable method called Simple Graph Condensation (SimGC), which uses a pre-trained model to guide the process. The new approach is up to 10 times faster than existing methods while performing just as well.
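
As a complement, here is a hedged sketch of the pre-training step that produces the frozen guide model. Because SGC removes nonlinearities, training it reduces to fitting a single linear classifier on the K-step propagated features S^K X; the function below reuses the hypothetical helpers from the sketch above and is likewise an illustration, not the authors' code.

```python
# Hedged sketch: "pre-training" the SGC guide on the original graph.
# SGC collapses to one linear layer on propagated features, so this is
# just logistic regression on S^K X. Reuses normalize_adj / sgc_layers
# from the previous sketch; all names are illustrative assumptions.
import torch
import torch.nn.functional as F

def pretrain_sgc(x_orig, adj_orig, labels, num_classes, k=2, epochs=200):
    s_k_x = sgc_layers(x_orig, normalize_adj(adj_orig), k)[-1]  # S^K X
    w = torch.zeros(x_orig.size(1), num_classes, requires_grad=True)
    opt = torch.optim.Adam([w], lr=0.1)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(s_k_x @ w, labels).backward()
        opt.step()
    return w.detach()  # frozen weights that guide condensation
```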

Keywords

* Artificial intelligence
* Optimization