Summary of Towards Scalable and Deep Graph Neural Networks Via Noise Masking, by Yuxuan Liang et al.
Towards Scalable and Deep Graph Neural Networks via Noise Masking
by Yuxuan Liang, Wentao Zhang, Zeang Sheng, Ling Yang, Quanqing Xu, Jiawei Jiang, Yunhai Tong, Bin Cui
First submitted to arXiv on: 19 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The authors tackle the challenge of scaling Graph Neural Networks (GNNs) to large-scale graph mining tasks. They identify a limitation of existing scalable methods, which focus on model simplification but neglect a data-centric issue: noise accumulates during feature propagation, degrading performance as models go deeper. The proposed random walk with noise masking (RMask) module enables deeper GNNs while preserving scalability. RMask is a plug-and-play module that eliminates noise within each propagation step, yielding a better trade-off between accuracy and efficiency.
Low | GrooveSquid.com (original content) | Low Difficulty Summary Graph Neural Networks are great at solving many problems, but they can be slow on really big graphs. Researchers have tried simplifying the models to make them faster, but this hasn’t fully solved the problem. The authors of this paper found that there’s a new issue with these simplified models: as you go deeper into the model, it starts to lose information and get less accurate. They created a new module called RMask that helps fix this problem by getting rid of extra noise in the data. This makes the models faster and more accurate, which is really important for big datasets. |
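To make the idea of "eliminating noise within each propagation step" concrete, here is a minimal sketch of masked feature propagation in a decoupled (precompute-style) GNN such as SGC. This is an illustrative stand-in, not the paper's actual RMask algorithm: the magnitude-threshold masking rule, the function name, and the dense-matrix setup are all assumptions made for demonstration.

```python
import numpy as np

def propagate_with_mask(adj_norm, features, num_hops, keep_threshold=0.01):
    """Toy sketch of noise-masked feature propagation.

    adj_norm : normalized adjacency matrix (dense here for simplicity;
               real scalable GNNs would use a sparse matrix).
    features : node feature matrix of shape (num_nodes, num_dims).

    At each hop we propagate features one step and then zero out entries
    whose magnitude falls below a threshold, mimicking the idea of
    masking noise introduced by propagation. (The threshold criterion is
    a hypothetical placeholder for the paper's masking rule.)
    """
    hops = [features]
    h = features
    for _ in range(num_hops):
        h = adj_norm @ h                      # one propagation step
        mask = np.abs(h) >= keep_threshold    # keep only "signal" entries
        h = h * mask                          # mask out the rest
        hops.append(h)
    return hops  # per-hop representations, as collected in SGC-style models
```

Because the masking happens inside the precomputation loop rather than during training, the sketch preserves the scalability of decoupled GNNs: the model that consumes `hops` never touches the graph structure.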