Summary of Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness, by Guibin Zhang et al.
Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness
by Guibin Zhang, Yanwei Yue, Kun Wang, Junfeng Fang, Yongduo Sui, Kai Wang, Yuxuan Liang, Dawei Cheng, Shirui Pan, Tianlong Chen
First submitted to arXiv on: 2 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed Graph Sparse Training (GST) approach addresses the computational challenges of applying Graph Neural Networks (GNNs) to large-scale graphs by dynamically manipulating sparsity at the data level. GST constructs a topology & semantic anchor at low training cost, then performs dynamic sparse training to align the sparse graph with the anchor, guided by the Equilibria Sparsification Principle. This yields a sparse graph with maximal topological integrity and no performance degradation. Experiments on 6 datasets and 5 backbones show that GST identifies subgraphs at higher graph sparsity levels, preserves key spectral properties, delivers significant speedups in GNN inference, and aids graph adversarial defense and graph lottery tickets. (A minimal code sketch of the prune-and-regrow idea follows this table.) |
| Low | GrooveSquid.com (original content) | Graph Neural Networks (GNNs) are really good at learning from graphs, but they can get slow when dealing with very large graphs. To fix this, scientists want to remove some of the connections in the graph so it's easier for GNNs to work with. There are two main ways to do this: one focuses on the shape of the graph, and the other looks at what the nodes mean. Each has its own problems: the first doesn't work well with GNNs, and the second gets worse when there are fewer connections. The new approach, called Graph Sparse Training (GST), tries to find a balance between keeping the important parts of the graph and making it faster for GNNs. It does this by creating an anchor that shows what's most important in the graph, then using that anchor to decide which connections to keep or remove. This makes the graph smaller and faster without losing important information. |
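To make the medium-difficulty description more concrete, here is a minimal, hypothetical sketch of a single prune-and-regrow update under a fixed edge budget, in the spirit of dynamic sparse training guided by semantic and topological (anchor) scores. This is not the authors' implementation: the function name `gst_sparsify_step`, the 50/50 blend of the two score types, and the 10% update fraction are illustrative assumptions for exposition only.

```python
# Hypothetical sketch of a GST-style prune-and-regrow step (not the authors' code).
import torch


def gst_sparsify_step(edge_scores: torch.Tensor,
                      anchor_scores: torch.Tensor,
                      mask: torch.Tensor,
                      update_frac: float = 0.1) -> torch.Tensor:
    """Return an updated boolean edge mask with the same number of kept edges.

    edge_scores:   semantic importance per edge (e.g., learned edge weights)
    anchor_scores: topological importance per edge (e.g., agreement with a
                   precomputed topology & semantic anchor)
    mask:          current boolean mask over edges (True = edge kept)
    """
    # Blend the semantic and topological views -- the "two heads" idea.
    # The equal weighting here is an assumption, not a detail from the paper.
    blended = 0.5 * edge_scores + 0.5 * anchor_scores

    n_update = int(update_frac * mask.sum().item())
    if n_update == 0:
        return mask

    # Prune: drop the n_update weakest edges among those currently kept.
    kept_idx = mask.nonzero(as_tuple=True)[0]
    weakest = kept_idx[torch.topk(blended[kept_idx], n_update, largest=False).indices]

    # Regrow: add the n_update strongest edges among those currently dropped,
    # so the overall sparsity budget stays fixed.
    dropped_idx = (~mask).nonzero(as_tuple=True)[0]
    strongest = dropped_idx[torch.topk(blended[dropped_idx], n_update, largest=True).indices]

    new_mask = mask.clone()
    new_mask[weakest] = False
    new_mask[strongest] = True
    return new_mask


if __name__ == "__main__":
    torch.manual_seed(0)
    num_edges = 1000
    mask = torch.zeros(num_edges, dtype=torch.bool)
    mask[:500] = True  # start from a 50%-sparse subgraph
    new_mask = gst_sparsify_step(torch.rand(num_edges), torch.rand(num_edges), mask)
    print(new_mask.sum().item())  # still 500 edges: the budget is preserved
```

In the paper's setting, such an update would be repeated during training so that the retained edge set gradually aligns with the anchor while the GNN continues to learn on the sparse graph; the scoring functions above are placeholders for whatever semantic and topological criteria the method actually uses.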
Keywords
* Artificial intelligence * GNN * Inference