
A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation

by Kexin Zhang, Shuhan Liu, Song Wang, Weili Shi, Chen Chen, Pan Li, Sheng Li, Jundong Li, Kaize Ding

First submitted to arXiv on: 25 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
A recent surge in research on graph machine learning under distribution shifts aims to train models that perform well on out-of-distribution (OOD) test data. This paper provides an up-to-date and forward-looking review of deep graph learning under distribution shifts, covering three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation. The authors formally define each problem, discuss the types of distribution shift that can affect graph learning, and categorize existing models under a proposed taxonomy. The review also summarizes commonly used datasets in this research area to facilitate further investigation.

Low Difficulty Summary (GrooveSquid.com, original content)
Graph machine learning is important for many applications, but it has a problem: when the data changes, the model’s performance drops. This happens because the model was trained on one type of data and tested on another. Researchers are working to fix this by training models that can perform well even when the data changes. The paper looks at three ways to do this: generalizing to new types of data, adapting during training, or adapting at test time.

Keywords

» Artificial intelligence  » Generalization  » Machine learning