Summary of Distribution Consistency based Self-Training for Graph Neural Networks with Sparse Labels, by Fali Wang et al.
Distribution Consistency based Self-Training for Graph Neural Networks with Sparse Labels
by Fali Wang, Tianxiang Zhao, Suhang Wang
First submitted to arXiv on: 18 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Few-shot node classification is challenging for Graph Neural Networks (GNNs) because supervision is sparse and the labeled and unlabeled nodes may follow different distributions. Self-training frameworks address the label sparsity by assigning pseudo-labels to selected unlabeled nodes, but current methods neglect, and can even amplify, the distribution shift between the training and testing node sets. This work proposes a Distribution-Consistent Graph Self-Training (DC-GST) framework to bridge that gap: it formulates the selection of informative pseudo-labeled nodes that reduce the distribution discrepancy as a differentiable optimization task, and it adopts a distribution-shift-aware edge predictor to augment the graph and improve generalizability when assigning pseudo-labels (a simplified sketch of the selection step follows the table). The proposed method outperforms state-of-the-art baselines on four benchmark datasets.
Low | GrooveSquid.com (original content) | This paper is about making Graph Neural Networks better at predicting what kind of node an unlabeled node is. Right now, these networks have trouble when they don't have enough labeled examples or when the patterns in the data change between the labeled and unlabeled parts. The authors use extra unlabeled nodes to help train the network, proposing a new method that accounts for how the data patterns might shift and uses that information to make better predictions.
Keywords
* Artificial intelligence
* Classification
* Few shot
* Optimization
* Self training