
Summary of LinkGPT: Teaching Large Language Models to Predict Missing Links, by Zhongmou He et al.


by Zhongmou He, Jing Zhu, Shengyi Qian, Joyce Chai, Danai Koutra

First submitted to arXiv on: 7 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This study applies Large Language Models (LLMs) to graph-based tasks, particularly link prediction (LP) on Text-Attributed Graphs (TAGs). The objective is to use LLMs to predict missing links between nodes in a graph, which evaluates an LLM's ability to reason over structured data and infer new facts from learned patterns. To address the challenges of effectively integrating pairwise structural information into LLMs and of computational bottlenecks at inference time, the authors propose LinkGPT, an end-to-end trained LLM for LP tasks. Training proceeds in two stages: the pairwise encoder, projector, and node projector are fine-tuned first, and the LLM is then further fine-tuned to predict links. Additionally, a retrieval-reranking scheme is introduced at inference time to improve efficiency while maintaining high LP accuracy; a minimal sketch of this scheme follows the summaries below. Experimental results show that LinkGPT achieves state-of-the-art performance on real-world graphs and superior generalization in zero-shot and few-shot settings.
Low Difficulty Summary (original content by GrooveSquid.com)
Link prediction (LP) on Text-Attributed Graphs (TAGs) means guessing missing links between nodes. Researchers have mainly used Large Language Models (LLMs) for node classification, so LP remains understudied. This study uses LLMs for link prediction to see how well they can understand structured data and make predictions from patterns they have learned. To tackle this challenge, the authors designed LinkGPT, an end-to-end trained model that learns from examples and makes predictions. The results show that LinkGPT is really good at LP and can even make good predictions on graphs it has never seen before.
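
To make the retrieval-reranking idea from the medium summary concrete, here is a minimal, hypothetical sketch of a retrieval-then-rerank link predictor in Python. Everything in it is an assumption made for illustration: the toy graph, the placeholder embed_node encoder, and the rerank_score function stand in for LinkGPT's trained encoder, projectors, and fine-tuned LLM, which the paper describes but this page does not reproduce.

```python
# Hypothetical sketch of a retrieval-then-rerank link predictor in the spirit
# of LinkGPT's inference scheme. All names and the toy scoring functions are
# illustrative assumptions, not the authors' actual implementation.

import zlib
import numpy as np

# Toy text-attributed graph: each node carries a short text attribute.
node_texts = [f"paper about topic {i % 4}" for i in range(100)]

def embed_node(text: str) -> np.ndarray:
    """Placeholder node encoder. LinkGPT trains a real encoder/projector;
    here we derive a fixed pseudo-random unit vector from the text."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

embeddings = np.stack([embed_node(t) for t in node_texts])

def retrieve(src: int, k: int = 20) -> list[int]:
    """Stage 1 (cheap): shortlist top-k candidate targets by embedding
    similarity, so the expensive model never scores every node pair."""
    scores = embeddings @ embeddings[src]
    scores[src] = -np.inf  # never propose a self-link
    return list(np.argsort(-scores)[:k])

def rerank_score(src: int, dst: int) -> float:
    """Stage 2 (expensive): pairwise scoring. In LinkGPT the fine-tuned LLM
    judges whether link (src, dst) exists; cosine similarity stands in here."""
    return float(embeddings[src] @ embeddings[dst])

def predict_links(src: int, k: int = 20, top: int = 5) -> list[int]:
    candidates = retrieve(src, k)
    candidates.sort(key=lambda dst: rerank_score(src, dst), reverse=True)
    return candidates[:top]

print(predict_links(0))
```

The design point the sketch illustrates is the cost split: retrieval touches every node with a cheap dot product, while the expensive pairwise scorer only sees the shortlisted candidates, which is what lets the scheme improve inference efficiency while keeping LP accuracy high.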

Keywords

» Artificial intelligence  » Classification  » Encoder  » Few shot  » Fine tuning  » Generalization  » Inference  » Zero shot