Summary of Community-Invariant Graph Contrastive Learning, by Shiyin Tan et al.
Community-Invariant Graph Contrastive Learning
by Shiyin Tan, Dongyuan Li, Renhe Jiang, Ying Zhang, Manabu Okumura
First submitted to arXiv on: 2 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Social and Information Networks (cs.SI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper's original abstract, available on arXiv
Medium | GrooveSquid.com (original content) | This research investigates the role of community structure in graph augmentation and finds that preserving it is a crucial advantage for learnable augmentation. Building on this observation, the authors propose a community-invariant graph contrastive learning (GCL) framework that maintains graph community structure during learnable graph augmentation. By maximizing spectral changes, the framework unifies topology and feature augmentation, enhancing model robustness. Experimental results on 21 benchmark datasets demonstrate the merits of this approach. (A toy spectral-change computation is sketched below the table.)
Low | GrooveSquid.com (original content) | This paper is about finding a better way to make computer models learn about graphs. Graphs are like pictures that show connections between things. Right now, most ways of making these models learn from graphs are pretty limited. They either focus on the shape of the graph or on what's inside it, but not both at the same time. This makes them not very good at dealing with noisy data. The researchers in this paper found a way to make these models better by preserving the community structure of the graph, which means the models can learn from noisy graphs and still get accurate results.
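
The medium summary mentions steering learnable augmentation through spectral changes. The sketch below is not the authors' code; it is a minimal illustration, on an assumed toy two-community graph, of how one might quantify the spectral change caused by a candidate edge-drop augmentation using the symmetric normalized Laplacian. The graph, the function names, and the choice of L2 distance between sorted eigenvalues are all assumptions made for illustration only.

```python
import numpy as np

def normalized_laplacian(adj: np.ndarray) -> np.ndarray:
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    inv_sqrt = np.zeros_like(deg)
    nz = deg > 0                      # guard against isolated nodes
    inv_sqrt[nz] = 1.0 / np.sqrt(deg[nz])
    d_inv_sqrt = np.diag(inv_sqrt)
    return np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt

def spectral_change(adj: np.ndarray, aug_adj: np.ndarray) -> float:
    """L2 distance between the sorted Laplacian spectra of the original
    graph and an augmented graph (one possible notion of spectral change)."""
    ev_orig = np.linalg.eigvalsh(normalized_laplacian(adj))
    ev_aug = np.linalg.eigvalsh(normalized_laplacian(aug_adj))
    return float(np.linalg.norm(ev_orig - ev_aug))

if __name__ == "__main__":
    # Toy graph: two triangles (communities {0,1,2} and {3,4,5})
    # joined by a single bridge edge (2, 3).
    adj = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        adj[i, j] = adj[j, i] = 1.0

    # Candidate augmentation A: drop an intra-community edge (0, 1).
    aug_intra = adj.copy()
    aug_intra[0, 1] = aug_intra[1, 0] = 0.0

    # Candidate augmentation B: drop the bridge edge (2, 3) between communities.
    aug_bridge = adj.copy()
    aug_bridge[2, 3] = aug_bridge[3, 2] = 0.0

    print("spectral change, intra-community drop:", spectral_change(adj, aug_intra))
    print("spectral change, bridge drop:         ", spectral_change(adj, aug_bridge))
```

In this toy graph, dropping the bridge edge that joins the two triangles perturbs the spectrum more than dropping an edge inside a triangle, which matches the intuition that community-breaking edits show up in the spectrum. The framework's actual objective and constraints are described in the paper itself.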