Summary of Fast Track to Winning Tickets: Repowering One-Shot Pruning for Graph Neural Networks, by Yanwei Yue et al.
Fast Track to Winning Tickets: Repowering One-Shot Pruning for Graph Neural Networks
by Yanwei Yue, Guibin Zhang, Haoran Yang, Dawei Cheng
First submitted to arXiv on: 10 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The Graph Lottery Ticket (GLT) hypothesis posits that sparse "winning tickets" can be identified in graph neural networks (GNNs) without compromising performance. Current GLT methods rely on iterative magnitude pruning (IMP), which offers better performance and stability than one-shot pruning but is computationally expensive. The paper reexamines the correlation between one-shot pruning and IMP, finding that while one-shot tickets are suboptimal on their own, they provide a "fast track" to stronger-performing tickets. To validate this, the authors introduce a framework that combines one-shot pruning with a denoising step. Compared to IMP-based methods, the proposed framework reaches higher weight and graph sparsity at faster speeds, with significant speedups and MAC savings. |
Low | GrooveSquid.com (original content) | This paper studies how to make Graph Neural Networks (GNNs) more efficient on large-scale graphs. GNNs are very good at learning from graph data, but they can be slow and need a lot of computing power on big datasets. Researchers have proposed an idea called the "Graph Lottery Ticket" hypothesis, which says we should look for small parts of the network and the graph (called "winning tickets") that do most of the work. This makes the network faster and more efficient. The authors of this paper tested different ways to find these winning tickets and found a new way that is even better than what others have done before. |
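The distinction the paper builds on, between one-shot pruning and iterative magnitude pruning (IMP), can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the function names, the per-round pruning schedule, and the omission of retraining between rounds are all simplifying assumptions.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Boolean mask that keeps the largest-magnitude weights and drops a
    `sparsity` fraction of them."""
    k = int(weights.size * sparsity)  # number of weights to prune
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return np.abs(weights) > threshold

def one_shot_prune(weights, sparsity):
    """One-shot pruning: jump to the target sparsity in a single step."""
    return weights * magnitude_mask(weights, sparsity)

def iterative_prune(weights, sparsity, rounds=5):
    """IMP-style pruning: remove a fixed fraction of the surviving weights
    each round (a real IMP loop would retrain and rewind between rounds)."""
    # Per-round fraction chosen so survivors compound to (1 - sparsity).
    per_round = 1 - (1 - sparsity) ** (1 / rounds)
    w = weights.copy()
    for _ in range(rounds):
        alive = np.abs(w) > 0
        k = int(alive.sum() * per_round)
        if k == 0:
            continue
        threshold = np.sort(np.abs(w[alive]))[k - 1]
        w[np.abs(w) <= threshold] = 0.0  # already-pruned weights stay zero
        # ...retraining of the surviving weights would happen here...
    return w
```

Both routines land at (approximately) the same sparsity; the difference the paper exploits is that the one-shot route gets there in a single step, while IMP pays for several prune-retrain rounds.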
Keywords
» Artificial intelligence » One shot » Pruning