Summary of Sampling-based Distributed Training with Message Passing Neural Network, by Priyesh Kakka et al.
Sampling-based Distributed Training with Message Passing Neural Network
by Priyesh Kakka, Sheel Nidhan, Rishikesh Ranade, Jay Pathak, Jonathan F. MacArt
First submitted to arXiv on: 23 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC); Fluid Dynamics (physics.flu-dyn)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper’s original abstract, available on arXiv
Medium | GrooveSquid.com (original content) | This paper introduces an approach for scaling edge-based graph neural networks as the number of nodes grows. The authors propose a domain-decomposition-based distributed training and inference method for message-passing neural networks (MPNNs). This scalable solution, called DS-MPNN, uses Nyström-approximation sampling techniques to handle large graphs. Experimental results show that DS-MPNN achieves accuracy comparable to single-GPU implementations, handles significantly more nodes than the baseline model, and outperforms node-based graph convolution networks (see the code sketch after this table).
Low | GrooveSquid.com (original content) | This paper makes it possible for computers to analyze very big graphs quickly and accurately. Graphs are like maps of relationships between things, and they’re used in many areas, such as social media or transportation planning. The problem is that as the number of nodes (things) grows, a single computer runs out of time and memory to do calculations on the graph. The authors found a way to break the big graph into smaller pieces, process each piece separately, and then put everything back together again. This new method, called DS-MPNN, keeps accuracy on par with single-GPU approaches while handling much bigger graphs than previous methods.
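The core recipe sketched in the summaries above is: decompose the graph into subdomains, subsample edges within each subdomain, and run message passing locally on each piece. The PyTorch sketch below is a hypothetical illustration of that recipe, not the authors’ implementation: contiguous index chunking stands in for a real domain decomposition, uniform edge subsampling stands in for the paper’s Nyström-approximation sampling, and the exchange of overlapping boundary-node states between subdomains is omitted. All names (`partition_nodes`, `sample_edges`, `TinyMPNNLayer`) are invented for this example.

```python
import torch

def partition_nodes(num_nodes: int, num_parts: int):
    """Split node indices into contiguous subdomains (a stand-in for a
    real spatial/graph domain decomposition)."""
    return torch.chunk(torch.arange(num_nodes), num_parts)

def sample_edges(edge_index: torch.Tensor, num_samples: int) -> torch.Tensor:
    """Uniformly subsample edges -- a crude stand-in for the paper's
    Nystrom-approximation sampling."""
    num_edges = edge_index.shape[1]
    keep = torch.randperm(num_edges)[: min(num_samples, num_edges)]
    return edge_index[:, keep]

class TinyMPNNLayer(torch.nn.Module):
    """One message-passing step: mean-aggregate neighbor features into
    each destination node, then update with a small MLP."""
    def __init__(self, dim: int):
        super().__init__()
        self.update = torch.nn.Sequential(
            torch.nn.Linear(2 * dim, dim), torch.nn.ReLU())

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(x.shape[0], 1).index_add_(
            0, dst, torch.ones(src.shape[0], 1)).clamp(min=1.0)
        return self.update(torch.cat([x, agg / deg], dim=-1))

# Toy inference pass: a random graph split across two subdomains.
torch.manual_seed(0)
num_nodes, dim = 1000, 16
x = torch.randn(num_nodes, dim)
edge_index = torch.randint(0, num_nodes, (2, 8000))  # (src, dst) pairs

layer = TinyMPNNLayer(dim)
with torch.no_grad():
    for part in partition_nodes(num_nodes, num_parts=2):
        # Keep only edges internal to this subdomain; the full method
        # would also exchange states for nodes shared across subdomains.
        mask = torch.isin(edge_index[0], part) & torch.isin(edge_index[1], part)
        sub_edges = sample_edges(edge_index[:, mask], num_samples=2000)
        x[part] = layer(x, sub_edges)[part]
```

In an actual distributed run, each subdomain (the loop body above) would live on its own GPU, with communication limited to shared boundary nodes, which is how the method can handle far more nodes than a single-GPU model.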
Keywords
* Artificial intelligence
* Inference