Summary of DiskGNN: Bridging I/O Efficiency and Model Accuracy for Out-of-Core GNN Training, by Renjie Liu et al.
DiskGNN: Bridging I/O Efficiency and Model Accuracy for Out-of-Core GNN Training
by Renjie Liu, Yichuan Wang, Xiao Yan, Haitian Jiang, Zhenkun Cai, Minjie Wang, Bo Tang, Jinyang Li
First submitted to arXiv on: 8 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | A new system called DiskGNN trains graph neural networks (GNNs) on graphs that exceed CPU memory without suffering read amplification or degraded model accuracy. Using offline sampling, DiskGNN decouples graph sampling from model computation and packs the node features each minibatch needs contiguously on disk, so they can be read sequentially. The system also incorporates a four-level feature store that caches node features in memory, batched packing to accelerate feature packing, and pipelined training to overlap disk access with other operations. Compared with the state-of-the-art systems Ginex and MariusGNN, DiskGNN speeds up training by over 8x while matching their best model accuracy. |
| Low | GrooveSquid.com (original content) | DiskGNN is a new way to train graph neural networks (GNNs) on really big graphs that don't fit in computer memory. That's hard because reading data from disk makes training slower, and shortcuts to speed it up can make the model less accurate. DiskGNN solves this with some special tricks, like packing the information on disk so it can be read quickly, and using memory to store the most important data. This new system is much faster than other similar systems, making it useful for many applications. |
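The key idea in the medium summary, packing each minibatch's node features contiguously on disk so training does one sequential read instead of many scattered page reads, can be illustrated with a toy sketch. This is a minimal illustration, not DiskGNN's actual implementation: the feature array, minibatch sampling, and file layout below are all hypothetical.

```python
import numpy as np
import os
import tempfile

# Hypothetical toy setup: 1000 nodes, 16 floats of features each.
num_nodes, feat_dim = 1000, 16
rng = np.random.default_rng(0)
features = rng.random((num_nodes, feat_dim)).astype(np.float32)

# Offline sampling: precompute which node IDs each minibatch will need,
# decoupled from model computation (stand-in for real graph sampling).
minibatches = [rng.choice(num_nodes, size=64, replace=False) for _ in range(10)]

# Pack each minibatch's features contiguously on disk, in access order.
# Reading scattered per-node records would touch many disk pages
# (read amplification); packing turns each load into one sequential read.
packed_path = os.path.join(tempfile.mkdtemp(), "packed.bin")
offsets = []
with open(packed_path, "wb") as f:
    for ids in minibatches:
        offsets.append(f.tell())
        f.write(features[ids].tobytes())

def load_minibatch(i):
    """One sequential read recovers all features for minibatch i."""
    nbytes = len(minibatches[i]) * feat_dim * 4  # float32 = 4 bytes
    with open(packed_path, "rb") as f:
        f.seek(offsets[i])
        buf = f.read(nbytes)
    return np.frombuffer(buf, dtype=np.float32).reshape(-1, feat_dim)

batch0 = load_minibatch(0)
```

The trade-off is duplication: a node sampled by several minibatches is stored once per minibatch, which is why the paper pairs packing with an in-memory feature cache.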