
Summary of Can GNN be Good Adapter for LLMs?, by Xuanwen Huang et al.


Can GNN be Good Adapter for LLMs?

by Xuanwen Huang, Kaiqiao Han, Yang Yang, Dezheng Bao, Quanjin Tao, Ziwei Chai, Qi Zhu

First submitted to arXiv on: 20 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores the application of large language models (LLMs) to modeling text-attributed graphs (TAGs). TAGs are graphs whose nodes are described by text, with applications in social media and recommendation systems. The authors propose GraphAdapter, a framework that uses a graph neural network (GNN) as an adapter for LLMs to tackle TAGs efficiently. GraphAdapter introduces only a small number of trainable parameters and can be trained at low computational cost via auto-regression on node text. Once pre-trained, the model can be fine-tuned with task-specific prompts for various downstream tasks. The authors demonstrate the effectiveness of GraphAdapter by training it on multiple real-world TAGs, achieving an average improvement of approximately 5% on node classification.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models (LLMs) are very good at understanding text and can learn new things without being trained directly on that information. Text-attributed graphs (TAGs) are a special type of graph whose nodes are described by text. The paper proposes a way to use LLMs with TAGs, called GraphAdapter. This framework is efficient because only a small number of parameters need to be adjusted during training, so it can learn quickly. Once trained, GraphAdapter can be fine-tuned for specific tasks by giving it prompts, which are hints about what to do.

Keywords

» Artificial intelligence  » Classification  » GNN  » Graph neural network  » Regression