
Can Graph Neural Networks Learn Language with Extremely Weak Text Supervision?

by Zihao Li, Lecheng Zheng, Bowen Jin, Dongqi Fu, Baoyu Jing, Yikun Ban, Jingrui He, Jiawei Han

First submitted to arxiv on: 11 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Social and Information Networks (cs.SI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper proposes a novel approach, inspired by Contrastive Language-Image Pre-training (CLIP), for adapting pre-trained Graph Neural Networks (GNNs) to downstream tasks. Through multi-modal prompt learning, the authors adapt a pre-trained GNN to new datasets and tasks given only a few semantically labeled samples. Graph embeddings are mapped directly into the same representation space as Large Language Models (LLMs), while both pre-trained models stay frozen, which keeps the number of learnable parameters small. Evaluated on real-world datasets, the approach demonstrates superior performance in few-shot, multi-task-level, and cross-domain settings.
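To make the idea concrete, here is a minimal, self-contained sketch of the general pattern described above: two frozen pre-trained encoders (a stand-in "GNN" and a stand-in "text encoder"), with only small prompt vectors and a projection into the text space left learnable, and few-shot inference done by cosine similarity against class-description embeddings. All names, dimensions, and the toy encoders are illustrative assumptions, not the paper’s actual architecture or code.

```python
import math
import random

random.seed(0)

# Illustrative sizes; everything in this sketch is a toy stand-in.
N_FEAT_G, N_FEAT_T = 6, 10     # raw node / token feature sizes
D_GRAPH, D_TEXT = 8, 12        # graph / text embedding sizes

def rand_matrix(rows, cols, scale=1.0):
    return [[random.gauss(0, scale) for _ in range(cols)] for _ in range(rows)]

def vm(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def mean_rows(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

# Frozen pre-trained encoders (stand-ins for a pre-trained GNN and an
# LLM text encoder): their weights are fixed and never updated.
W_GNN = rand_matrix(N_FEAT_G, D_GRAPH)
W_TXT = rand_matrix(N_FEAT_T, D_TEXT)

def frozen_gnn(node_feats):
    """Encode each node, then mean-pool into one graph embedding."""
    return mean_rows([[math.tanh(x) for x in vm(f, W_GNN)] for f in node_feats])

def frozen_text_encoder(token_feats):
    """Encode each token, then mean-pool into one text embedding."""
    return mean_rows([[math.tanh(x) for x in vm(f, W_TXT)] for f in token_feats])

# The only learnable pieces: a graph prompt, a text prompt, and a
# projection mapping graph embeddings into the LLM's text space.
graph_prompt = [0.0] * D_GRAPH
text_prompt = [0.0] * D_TEXT
P = rand_matrix(D_GRAPH, D_TEXT, scale=0.3)

def embed_graph(node_feats):
    g = frozen_gnn(node_feats)
    return vm([gi + pi for gi, pi in zip(g, graph_prompt)], P)

def embed_text(token_feats):
    t = frozen_text_encoder(token_feats)
    return [ti + pi for ti, pi in zip(t, text_prompt)]

# Few-shot inference: score one graph against text embeddings of two
# class descriptions and pick the closest one.
graph = [[random.gauss(0, 1) for _ in range(N_FEAT_G)] for _ in range(5)]
class_texts = [
    [[random.gauss(0, 1) for _ in range(N_FEAT_T)] for _ in range(4)]
    for _ in range(2)
]

g_emb = embed_graph(graph)
scores = [cosine(g_emb, embed_text(t)) for t in class_texts]
predicted = max(range(len(scores)), key=scores.__getitem__)
print("class similarities:", scores, "-> predicted class", predicted)
```

In a real system, the prompts and projection would be trained with a contrastive objective on the few labeled samples; the key design choice the summary highlights is that the frozen encoders keep the learnable parameter count tiny.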
Low Difficulty Summary (GrooveSquid.com, original content)
The paper helps build better Graph Neural Networks by borrowing an idea from training that connects text and images, and using it to connect graphs and text instead. This lets GNNs work with very little data and learn new tasks quickly. The authors use a method called multi-modal prompt learning, which learns graph prompts and text prompts at the same time. They test their approach on real-world datasets and show that it works well across different scenarios.

Keywords

» Artificial intelligence  » Embedding  » Few shot  » Multi modal  » Multi task  » Prompt