
Demystifying Language Model Forgetting with Low-rank Example Associations

by Xisen Jin, Xiang Ren

First submitted to arXiv on: 20 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
Large Language Models (LLMs) often forget upstream data when fine-tuned on new tasks. Despite efforts to mitigate forgetting, few studies have investigated how forgotten upstream examples are connected to newly learned tasks; understanding these dependencies enables targeted mitigation of forgetting. This paper empirically analyzes the forgetting that occurs in N upstream examples of language modeling or instruction tuning after fine-tuning LLMs on one of M new tasks. The analysis shows that the forgetting is well approximated by low-rank matrices, indicating simple associations between learned tasks and forgotten upstream examples. Leveraging this insight, the paper predicts the forgetting of upstream examples when fine-tuning on unseen tasks via matrix completion over the empirical associations. This approach outperforms prior methods that use LMs to model semantic relationships between learned tasks and upstream examples when predicting forgetting. The authors demonstrate the practical utility of their analysis by showing statistically significant reductions in forgetting when the predicted examples are upweighted for replay during fine-tuning.
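
To make the low-rank claim concrete, here is a minimal, self-contained sketch (not the authors' implementation) of the two steps the summary describes: checking that a task-by-example forgetting matrix is well approximated at low rank, and predicting a held-out task's forgetting from a few observed entries. The matrix shapes, the rank k, the synthetic data, and the least-squares completion step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forgetting matrix F: F[i, j] measures how much upstream
# example j is forgotten after fine-tuning on task i (e.g., a change in
# log-perplexity). Here it is synthetic low-rank data plus noise.
M, N, k = 40, 500, 5  # tasks, upstream examples, assumed rank
F = rng.standard_normal((M, k)) @ rng.standard_normal((k, N))
F += 0.01 * rng.standard_normal((M, N))

# Step 1: a truncated SVD shows the observed tasks' forgetting is
# well approximated at rank k.
U, S, Vt = np.linalg.svd(F[1:], full_matrices=False)  # hold out task 0
F_k = (U[:, :k] * S[:k]) @ Vt[:k, :]                  # rank-k reconstruction
rel_err = np.linalg.norm(F[1:] - F_k) / np.linalg.norm(F[1:])
print(f"relative error of rank-{k} approximation: {rel_err:.4f}")

# Step 2: matrix-completion-style prediction for the held-out task.
# Probe forgetting on a small subset of upstream examples, then fit the
# task's coordinates in the shared factors Vt by least squares.
obs = rng.choice(N, size=50, replace=False)  # probed upstream examples
f_new = F[0]                                 # treat task 0 as "unseen"
coef, *_ = np.linalg.lstsq(Vt[:k, obs].T, f_new[obs], rcond=None)
f_pred = coef @ Vt[:k, :]                    # predicted forgetting, all N examples

unobs = np.setdiff1d(np.arange(N), obs)
rel = np.linalg.norm(f_pred[unobs] - f_new[unobs]) / np.linalg.norm(f_new[unobs])
print(f"relative error on unprobed examples: {rel:.4f}")
```

The least-squares fit is one simple instantiation of matrix completion; the paper's exact completion method may differ.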

Low Difficulty Summary (GrooveSquid.com, original content)
When we teach machines to do new things, they often forget what they already knew. This paper looks at how that forgetting happens and how it is connected to the new tasks being learned. The researchers fine-tuned language models on many new tasks and measured which earlier examples were forgotten, finding that the forgetting can be explained by simple connections between the learned tasks and the forgotten examples. They used this insight to predict which examples will be forgotten, then tested whether replaying those examples could reduce forgetting. Their results show that the method works, making it useful for teaching machines new skills without losing old ones.
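
The summaries above mention replaying the predicted examples. As an illustration, here is one hypothetical way to upweight them: sample upstream examples for replay with probability proportional to their predicted forgetting. The proportional-sampling rule is an assumption, not the paper's stated scheme.

```python
import numpy as np

def replay_batch(pred_forgetting: np.ndarray, batch_size: int,
                 rng: np.random.Generator) -> np.ndarray:
    """Sample upstream example indices, upweighting those predicted
    to be forgotten (an assumed stand-in for the paper's upweighting)."""
    scores = np.clip(pred_forgetting, 0.0, None)  # negative predictions -> zero risk
    probs = scores / scores.sum()
    return rng.choice(len(pred_forgetting), size=batch_size, replace=False, p=probs)

rng = np.random.default_rng(0)
predicted = rng.random(500)  # stands in for f_pred from the earlier sketch
replay_idx = replay_batch(predicted, batch_size=32, rng=rng)
print(replay_idx[:8])  # upstream examples to mix into the next fine-tuning batch
```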

Keywords

» Artificial intelligence  » Fine-tuning  » Instruction tuning