
Summary of Learning on Graphs with Large Language Models (LLMs): A Deep Dive into Model Robustness, by Kai Guo et al.


Learning on Graphs with Large Language Models (LLMs): A Deep Dive into Model Robustness

by Kai Guo, Zewen Liu, Zhikai Chen, Hongzhi Wen, Wei Jin, Jiliang Tang, Yi Chang

First submitted to arXiv on: 16 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

Abstract of paper | PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read the original abstract here.

Medium Difficulty Summary (original GrooveSquid.com content)
Large Language Models (LLMs) have achieved impressive performance across various natural language processing tasks. Recently, LLM-based pipelines have been developed to enhance learning on graphs with text attributes, demonstrating promising results. However, graphs are known to be susceptible to adversarial attacks, and it remains unclear whether LLMs are robust when learning on graphs. The authors address this gap by investigating LLM robustness against structural and textual perturbations of graphs. They find that both LLMs-as-Enhancers and LLMs-as-Predictors offer superior robustness compared to shallow models, making them promising approaches for learning on graphs (a minimal code sketch of this evaluation setup follows the summaries below).
Low Difficulty Summary (original GrooveSquid.com content)
This paper is about using special language models called Large Language Models (LLMs) to learn about graphs with words attached. Graphs are like networks that can be attacked, and it’s not clear if these LLMs can handle those attacks. The researchers looked at how well the LLMs worked against different kinds of attacks on the graphs. They found that these LLMs were better than other methods at handling these attacks.
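
To make the evaluation setup in the medium summary concrete, here is a minimal, hypothetical sketch in plain PyTorch: random node features stand in for LLM-encoded text attributes (the LLMs-as-Enhancers idea), a small GCN classifies nodes, and random edge insertions stand in for a structural attack. The names `TinyGCN` and `perturb_edges`, the toy data, and the specific perturbation are illustrative assumptions, not the paper's actual models, attacks, or datasets.

```python
# Hedged sketch of a graph-robustness check: train on a clean graph,
# then compare accuracy on a structurally perturbed graph.
# Node features here stand in for LLM-encoded node text (LLMs-as-Enhancers).
import torch
import torch.nn.functional as F


def normalized_adjacency(edge_index, num_nodes):
    """Symmetrically normalized adjacency with self-loops, as used by a GCN."""
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj = torch.maximum(adj, adj.t())          # make the graph undirected
    adj += torch.eye(num_nodes)                # add self-loops
    deg_inv_sqrt = adj.sum(1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class TinyGCN(torch.nn.Module):
    """Two-layer GCN classifier over (LLM-derived) node features."""
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hidden)
        self.w2 = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)


def perturb_edges(edge_index, num_nodes, n_flips, seed=0):
    """Structural-attack stand-in: randomly insert n_flips spurious edges."""
    g = torch.Generator().manual_seed(seed)
    new_edges = torch.randint(0, num_nodes, (2, n_flips), generator=g)
    return torch.cat([edge_index, new_edges], dim=1)


# --- toy text-attributed graph (features stand in for LLM text embeddings) ---
num_nodes, feat_dim, num_classes = 100, 32, 3
torch.manual_seed(0)
x = torch.randn(num_nodes, feat_dim)           # pretend: LLM-encoded node text
y = torch.randint(0, num_classes, (num_nodes,))
edge_index = torch.randint(0, num_nodes, (2, 300))

model = TinyGCN(feat_dim, 16, num_classes)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
adj_clean = normalized_adjacency(edge_index, num_nodes)

for _ in range(100):                            # train on the clean graph
    opt.zero_grad()
    loss = F.cross_entropy(model(x, adj_clean), y)
    loss.backward()
    opt.step()

# evaluate on the clean graph vs. the structurally perturbed graph
adj_attacked = normalized_adjacency(perturb_edges(edge_index, num_nodes, 150), num_nodes)
with torch.no_grad():
    acc_clean = (model(x, adj_clean).argmax(1) == y).float().mean()
    acc_attacked = (model(x, adj_attacked).argmax(1) == y).float().mean()
print(f"accuracy clean: {acc_clean.item():.2f}  "
      f"under structural perturbation: {acc_attacked.item():.2f}")
```

In the paper's actual setting, the random features would be replaced by embeddings produced by an LLM encoder (or the LLM would be prompted directly as a predictor), and the random edge insertions by the structural and textual perturbations the authors study.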

Keywords

* Artificial intelligence
* Natural language processing