Summary of SpanGNN: Towards Memory-Efficient Graph Neural Networks via Spanning Subgraph Training, by Xizhi Gu et al.
SpanGNN: Towards Memory-Efficient Graph Neural Networks via Spanning Subgraph Training
by Xizhi Gu, Hongzheng Li, Shihong Gao, Xinyan Zhang, Lei Chen, Yingxia Shao
First submitted to arXiv on: 7 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | SpanGNN is a new method for training Graph Neural Networks (GNNs) that minimizes peak memory usage while maintaining high accuracy. Traditional full-graph GNN training suffers from large peak memory consumption and Out-of-Memory failures on large graphs. SpanGNN instead trains the GNN model over a sequence of spanning subgraphs, incrementally updating the subgraph's edge set between epochs to cap memory consumption. To preserve model quality, two edge sampling strategies are introduced, one variance-reduced and one noise-reduced, which select high-quality edges for the GNN learning process (an illustrative sketch follows this table). Experimental results on widely used datasets demonstrate SpanGNN's advantages in both model performance and low peak memory usage. |
Low | GrooveSquid.com (original content) | Graph Neural Networks (GNNs) are really good at understanding graph data! But training them can be a problem when dealing with big graphs, because it uses too much memory and might even run out. One solution is to break the graph into smaller pieces, but this makes the results less accurate. The new SpanGNN method tries to fix this by training on smaller "subgraphs" that keep all the nodes but only some of the edges, and updating those edges little by little between training rounds. This reduces the amount of memory needed while still keeping the accuracy high. The team tested it on some big datasets and showed that it's a better way to train GNNs. |
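To make the training loop concrete, below is a minimal, self-contained PyTorch sketch of the general idea described in the abstract, not the authors' implementation. Each epoch, the model trains on a subgraph that keeps every node but only a sampled fraction of edges, so the message-passing memory scales with the sampled edge count rather than the full edge list. The `sample_spanning_subgraph` function, the `TinyGCNLayer`/`TinyGNN` modules, and all hyperparameters are illustrative assumptions: uniform edge sampling stands in for SpanGNN's variance-reduced and noise-reduced strategies, and per-epoch resampling is a simplification of the paper's incremental between-epoch updates.

```python
import torch
import torch.nn.functional as F

def sample_spanning_subgraph(edge_index, keep_ratio=0.5):
    """Keep every node; retain a random fraction of edges.
    Uniform sampling is a placeholder for SpanGNN's
    variance-/noise-reduced edge selection."""
    mask = torch.rand(edge_index.size(1)) < keep_ratio
    return edge_index[:, mask]

class TinyGCNLayer(torch.nn.Module):
    """Mean-neighbor aggregation plus a self connection; an
    illustrative stand-in for a real GNN layer."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        src, dst = edge_index
        # Sum neighbor features, then normalize by in-degree.
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(x.size(0)).index_add_(
            0, dst, torch.ones(dst.size(0))).clamp(min=1.0)
        return self.lin(agg / deg.unsqueeze(1) + x)

class TinyGNN(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=32, num_classes=4):
        super().__init__()
        self.l1 = TinyGCNLayer(in_dim, hidden)
        self.l2 = TinyGCNLayer(hidden, num_classes)

    def forward(self, x, edge_index):
        return self.l2(F.relu(self.l1(x, edge_index)), edge_index)

# Toy graph: 100 nodes, 1000 random directed edges, 4 classes.
num_nodes = 100
x = torch.randn(num_nodes, 16)
edge_index = torch.randint(0, num_nodes, (2, 1000))
y = torch.randint(0, 4, (num_nodes,))

model = TinyGNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(20):
    # Re-sample the spanning subgraph each epoch; SpanGNN instead
    # updates the edge set incrementally between epochs.
    sub_ei = sample_spanning_subgraph(edge_index, keep_ratio=0.5)
    opt.zero_grad()
    loss = F.cross_entropy(model(x, sub_ei), y)
    loss.backward()
    opt.step()
```

Per the abstract, the real method biases edge selection toward high-quality edges via its variance-reduced and noise-reduced strategies; the uniform sampler above ignores edge quality entirely and only illustrates the memory-side mechanism.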
Keywords
» Artificial intelligence » GNN