
Summary of MoFO: Momentum-Filtered Optimizer for Mitigating Forgetting in LLM Fine-Tuning, by Yupeng Chen et al.


MoFO: Momentum-Filtered Optimizer for Mitigating Forgetting in LLM Fine-Tuning

by Yupeng Chen, Senmiao Wang, Zhihang Lin, Zeyu Qin, Yushun Zhang, Tian Ding, Ruoyu Sun

First submitted to arXiv on: 30 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The proposed Momentum-Filtered Optimizer (MoFO) algorithm is a new approach to fine-tuning large language models (LLMs). During the pre-training stage, LLMs acquire general knowledge that can be lost during subsequent fine-tuning. MoFO addresses this issue by iteratively selecting and updating only the model parameters with the largest momentum magnitudes. This method achieves performance similar to full-parameter training while keeping the parameters closer to the pre-trained model, thereby reducing knowledge forgetting. Unlike existing methods, MoFO does not require access to the pre-training data, making it suitable for fine-tuning open-source LLMs whose pre-training corpora are unavailable. Additionally, MoFO leaves the original loss function unchanged, which avoids impairing performance on the fine-tuning tasks themselves. The algorithm is validated through rigorous convergence analysis and extensive experiments, demonstrating its effectiveness in mitigating forgetting and enhancing fine-tuning performance.
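The core idea described above, update only the parameters whose momentum magnitudes are largest, can be sketched as a masked Adam-style step. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a single flat parameter array, a hypothetical update fraction `frac`, and omits details such as bias correction and the paper's parameter-block partitioning.

```python
import numpy as np

def mofo_step(param, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, frac=0.1):
    """One simplified MoFO-style step (illustrative sketch).

    Adam moments m and v are updated for all entries, but only the
    fraction `frac` of entries with the largest momentum magnitude
    actually move; the rest stay at their current (pre-trained) values.
    Bias correction is omitted for brevity.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Momentum filter: keep only the top-`frac` entries by |m|.
    k = max(1, int(frac * m.size))
    thresh = np.partition(np.abs(m).ravel(), -k)[-k]
    mask = np.abs(m) >= thresh
    update = lr * m / (np.sqrt(v) + eps)
    param = param - np.where(mask, update, 0.0)
    return param, m, v
```

Because unfiltered coordinates are never touched, most of the model stays exactly at its pre-trained values after each step, which is how this scheme keeps the fine-tuned model close to the original without needing the pre-training data or a modified loss.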
Low Difficulty Summary (written by GrooveSquid.com; original content)
MoFO is a new way to fine-tune big language models. These models learn lots of things during training, but sometimes they forget what they learned before. MoFO helps keep that knowledge by choosing only the most important parts of the model to update. This method works just as well as updating everything, and it keeps the model closer to its original state. The best part is that it doesn't need the old data the model was trained on, which makes it useful for fine-tuning models when that information isn't available. MoFO also leaves the way the model learns unchanged, so it won't hurt how well the model does on the new tasks.

Keywords

» Artificial intelligence  » Fine tuning  » Loss function