
Text-Free Multi-domain Graph Pre-training: Toward Graph Foundation Models

by Xingtong Yu, Chang Zhou, Yuan Fang, Xinming Zhang

First submitted to arXiv on: 22 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes MDGPT, a text-free framework for pre-training and adapting graph foundation models across diverse domains. The authors aim to overcome the challenge of aligning graphs with different characteristics: they introduce domain tokens to align features across source domains, and dual prompts to adapt the model to the target domain using unified multi-domain knowledge. The approach is evaluated on six public datasets, achieving up to 37.9% better performance than prior art.

Low Difficulty Summary (original content by GrooveSquid.com)
Graphs are everywhere! This paper asks whether we can train a single graph model that works well for many different types of graphs from various domains. One big problem is that graphs from different areas have very different characteristics. Some previous attempts used text descriptions to connect these graphs, but that only works for text-attributed graphs. The authors propose MDGPT, a new way to pre-train and adapt graph models without using text. They create special tokens to help align features across domains and use prompts to tailor the model's knowledge to each target domain. The approach is tested on six public datasets and does much better than previous methods.

Keywords

» Artificial intelligence