GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network
by Shuzhou Yuan, Ercong Nie, Michael Färber, Helmut Schmid, Hinrich Schütze
First submitted to arXiv on: 18 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes GNNavi, a prompt-based parameter-efficient fine-tuning (PEFT) approach that improves the adaptability of Large Language Models (LLMs) in low-data scenarios. Building on insights into the information flow dynamics of In-Context Learning (ICL), the authors insert a Graph Neural Network (GNN) layer into the LLM to guide how information from the prompt propagates and is aggregated (see the sketch after the table). In few-shot text classification experiments with GPT-2 and Llama2, GNNavi surpasses standard prompt-based fine-tuning while updating only 0.2% to 0.5% of the parameters, and it outperforms existing PEFT approaches such as prefix tuning, LoRA, and Adapter. |
| Low | GrooveSquid.com (original content) | This paper helps us understand how to make language models better at learning new things. These models are already good at tasks when they're given lots of examples to practice with. But what if we only have a few examples? That's where this new way of fine-tuning comes in. It uses something called a graph neural network to help the model understand how to use the information it's given. The result is that the model can learn even when it doesn't have many examples, and it does so while changing only a tiny fraction of its settings. |
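To make the mechanism concrete, here is a minimal sketch of how a single trainable GNN layer could be hooked into a frozen GPT-2, in the spirit of the medium summary above. It is an illustration under stated assumptions, not the paper's actual implementation: the names `GNNaviLayer`, `build_prompt_graph`, and `INSERT_LAYER`, the GCN-style aggregation, and the placeholder graph are all assumptions.

```python
# Minimal sketch of a GNNavi-style PEFT setup, assuming a GCN-style
# aggregation step. GNNaviLayer, build_prompt_graph, and INSERT_LAYER are
# illustrative names, not from the paper's released code.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class GNNaviLayer(nn.Module):
    """One message-passing step over a graph of prompt token positions."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (batch, seq, seq) row-normalized adjacency encoding which
        # positions may pass information to which (e.g. demonstration tokens
        # to label tokens to the prediction position, following the ICL
        # information-flow view).
        aggregated = torch.matmul(adj, hidden_states)  # neighbor aggregation
        return hidden_states + torch.relu(self.proj(aggregated))

def build_prompt_graph(hidden_states: torch.Tensor) -> torch.Tensor:
    # Placeholder graph: fully connected causal edges, row-normalized.
    # The paper constructs a task-specific graph over the prompt instead.
    bsz, seq, _ = hidden_states.shape
    adj = torch.tril(torch.ones(seq, seq, device=hidden_states.device))
    adj = adj / adj.sum(dim=-1, keepdim=True)
    return adj.unsqueeze(0).expand(bsz, -1, -1)

model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():  # freeze the entire backbone LLM
    p.requires_grad = False

gnn = GNNaviLayer(model.config.hidden_size)  # the only trainable weights

INSERT_LAYER = 6  # illustrative choice of transformer block

def gnn_hook(module, inputs, output):
    # Replace the block's hidden states with the GNN-refined ones.
    hidden = output[0]
    adj = build_prompt_graph(hidden)
    return (gnn(hidden, adj),) + output[1:]

model.transformer.h[INSERT_LAYER].register_forward_hook(gnn_hook)

trainable = sum(p.numel() for p in gnn.parameters())
total = sum(p.numel() for p in model.parameters()) + trainable
print(f"trainable share: {trainable / total:.2%}")
```

With GPT-2 small (124M parameters), the single linear layer adds roughly 590K trainable weights, about 0.5% of the total, which lines up with the 0.2% to 0.5% range reported in the summary.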
Keywords
» Artificial intelligence » Few-shot » Fine-tuning » GNN » GPT » Graph neural network » LoRA » Parameter-efficient » Prompt » Text classification