Summary of Negative As Positive: Enhancing Out-of-distribution Generalization For Graph Contrastive Learning, by Zixu Wang et al.
Negative as Positive: Enhancing Out-of-distribution Generalization for Graph Contrastive Learning
by Zixu Wang, Bingbing Xu, Yige Yuan, Huawei Shen, Xueqi Cheng
First submitted to arxiv on: 25 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract |
Medium | GrooveSquid.com (original content) | Graph contrastive learning (GCL) has been instrumental in advancing graph pre-training, but its ability to generalize beyond the training distribution remains underexamined. The standard InfoNCE objective in GCL treats all cross-domain pairs as negative samples, which widens the gap between domains and hinders out-of-distribution (OOD) performance. To address this limitation, we introduce "Negative as Positive", a strategy that treats the most semantically similar cross-domain pairs as positives during training. Experiments on diverse datasets demonstrate significant improvements in the OOD generalization of GCL models. |
Low | GrooveSquid.com (original content) | There's a way to train computers to understand graph structures called Graph Contrastive Learning (GCL). It's been really helpful, but it has one major problem: it doesn't do well with new data that is very different from what it learned on. To fix this, we came up with a new idea: during training, we treat the most similar pairs of data from different sources as if they were meant to be together. We tested this approach on many datasets and found that it really helps GCL models handle new data better. |
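To make the idea concrete, here is a minimal, hypothetical sketch of an InfoNCE-style loss in which the `top_k` most similar cross-domain negatives are promoted into the positive set. This is not the authors' implementation: the function names, the cosine-similarity measure, and the top-k promotion rule are all illustrative assumptions about how "negative as positive" could be realized.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors (plain lists)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nap_info_nce(anchor, positive, negatives, neg_domains,
                 anchor_domain, tau=0.5, top_k=1):
    """InfoNCE with a 'negative as positive' twist (illustrative sketch):
    the top_k most similar cross-domain negatives are moved into the
    positive set instead of being pushed away. top_k=0 recovers the
    standard InfoNCE loss."""
    pos_sims = [cosine(anchor, positive)]
    neg_sims = [cosine(anchor, n) for n in negatives]
    # rank cross-domain candidates by similarity to the anchor
    cross = sorted(
        (i for i, d in enumerate(neg_domains) if d != anchor_domain),
        key=lambda i: neg_sims[i], reverse=True)
    promoted = set(cross[:top_k])
    pos_sims += [neg_sims[i] for i in promoted]
    remaining = [s for i, s in enumerate(neg_sims) if i not in promoted]
    # -log( sum over positives / sum over all candidates )
    num = sum(math.exp(s / tau) for s in pos_sims)
    den = num + sum(math.exp(s / tau) for s in remaining)
    return -math.log(num / den)
```

Because a promoted pair moves from the denominator-only term into the numerator, the loss no longer penalizes similarity between semantically close samples from different domains, which is the intuition behind the improved OOD generalization.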
Keywords
» Artificial intelligence » Generalization » Optimization