
Summary of Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity, by Mucong Ding et al.


Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity

by Mucong Ding, Tahseen Rabbani, Bang An, Evan Z Wang, Furong Huang

First submitted to arXiv on: 21 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)

The proposed sketch-based training algorithm for Graph Neural Networks (GNNs) addresses the challenge of ever-growing graph sizes by training atop compact sketches of the graph adjacency matrix and node embeddings. This eliminates the linear dependence of training complexity on graph size, in contrast to existing methods that rely on sampling or historical embeddings. The framework applies polynomial tensor-sketch (PTS) theory to sketch the non-linear activations and graph convolution matrices in GNNs, and additionally develops a locality-sensitive hashing (LSH) technique to improve the quality of the sketches. Experimental results on large-graph benchmarks demonstrate the scalability and competitive accuracy of Sketch-GNNs versus their full-size GNN counterparts.
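
To make the sketching idea concrete, here is a minimal count-sketch of a node-feature matrix in NumPy. This is an illustrative sketch under simplifying assumptions, not the paper's implementation: Sketch-GNN additionally uses polynomial tensor-sketch (PTS) approximations to propagate such sketches through non-linear activations, which is omitted here, and all function and variable names below are our own.

```python
# A minimal, illustrative count-sketch of a node-feature matrix (NumPy only).
# NOT the paper's implementation: Sketch-GNN combines sketches like this with
# PTS approximations of the non-linearities, omitted here for brevity.
import numpy as np

rng = np.random.default_rng(0)

def count_sketch(X, sketch_dim):
    """Compress the n rows of X into sketch_dim rows.

    Each row i is hashed to a bucket h[i] and multiplied by a random
    sign s[i]; the sketch is the signed sum of rows per bucket, so its
    size is independent of the number of nodes n.
    """
    n, d = X.shape
    h = rng.integers(0, sketch_dim, size=n)  # hash each node to a bucket
    s = rng.choice([-1.0, 1.0], size=n)      # random signs
    S = np.zeros((sketch_dim, d))
    np.add.at(S, h, s[:, None] * X)          # scatter-add signed rows
    return S, h, s

# Example: a 10,000-node feature matrix compressed to 256 sketch rows.
X = rng.standard_normal((10_000, 64))
S, h, s = count_sketch(X, sketch_dim=256)
print(S.shape)  # (256, 64) -- memory no longer scales with graph size
```

The key property is that the sketch has sketch_dim rows regardless of how many nodes the graph has, which is what makes training complexity sublinear in graph size possible.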

Low Difficulty Summary (GrooveSquid.com original content)

GNNs are used for tasks like node classification, but they struggle when dealing with large graphs. To cope, researchers came up with tricks such as sampling parts of the graph or reusing old (historical) information. But these methods still had a big problem: their training cost still grew linearly with the size of the graph. This paper proposes a new method that uses small sketches to represent the graph and its node features, and trains the GNN on those sketches instead. This makes training faster and more efficient. The approach is based on special math called polynomial tensor-sketch (PTS) theory, which helps with sketching the non-linear parts of GNNs. To make the sketches even better, the authors developed a technique called locality-sensitive hashing (LSH). Tests show that this new method works well and can handle large graphs.
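
For readers curious what locality-sensitive hashing looks like in practice, below is a minimal SimHash-style example in NumPy. It only illustrates the bucketing idea: the hyperplanes here are fixed and random, whereas the paper develops its own LSH technique to improve sketch quality during training, and the names below are ours.

```python
# A minimal SimHash-style LSH sketch (NumPy only). Purely illustrative:
# the paper's LSH is tailored to improve its graph sketches, while this
# just shows how similar embeddings tend to land in the same bucket.
import numpy as np

rng = np.random.default_rng(1)

def simhash_buckets(X, num_planes):
    """Assign each row of X a bucket id from the signs of num_planes
    random-hyperplane projections; rows that are close in cosine
    similarity tend to share a bucket id."""
    planes = rng.standard_normal((X.shape[1], num_planes))
    bits = (X @ planes > 0).astype(int)          # sign pattern per row
    return bits @ (1 << np.arange(num_planes))   # pack bits into an int id

X = rng.standard_normal((1000, 32))
buckets = simhash_buckets(X, num_planes=8)  # 2**8 = 256 possible buckets
print(buckets[:5])
```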

Keywords

  • Artificial intelligence
  • Classification
  • GNN