Summary of Amuro and Char: Analyzing the Relationship Between Pre-Training and Fine-Tuning of Large Language Models, by Kaiser Sun et al.
Amuro and Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models
by Kaiser Sun, Mark Dredze
First submitted to arXiv on: 13 Aug 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper investigates the relationship between pre-training and fine-tuning of large language models. Specifically, it fine-tunes multiple intermediate pre-trained model checkpoints and examines the effect on their performance. The results, demonstrated across 18 datasets, suggest that continual pre-training improves the model in a latent way that is only revealed after fine-tuning, and that additional fine-tuning benefits some datasets more than others. The study also finds that while supervised fine-tuning significantly improves the model's performance, it can cause forgetting of previously known domain knowledge and of tasks not seen during fine-tuning. Furthermore, the model shows high sensitivity to evaluation prompts after supervised fine-tuning, but this sensitivity can be alleviated by more pre-training. |
| Low | GrooveSquid.com (original content) | The paper looks at how language models are trained. It's like a recipe for making the best possible model. The authors tried different ways of training and tested them on lots of datasets. The results show that the way the model is trained matters: training in stages makes the model better and better, but fine-tuning can also cause it to forget some things. This is important because language models are used for many tasks, like translating languages or understanding what people mean. |
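The experimental protocol described in the medium summary, fine-tuning each intermediate pre-training checkpoint and comparing performance before and after, can be sketched as a simple loop. This is a hypothetical illustration, not the authors' code: the checkpoint names, dataset names, and the `evaluate`/`fine_tune` stubs are all placeholder assumptions standing in for real training and inference.

```python
# Hypothetical sketch of the study's protocol: for each intermediate
# pre-training checkpoint, measure performance on each benchmark before
# and after supervised fine-tuning. All names below are illustrative.

CHECKPOINTS = ["step-10k", "step-50k", "step-100k"]  # intermediate pre-trained models
DATASETS = ["dataset_a", "dataset_b", "dataset_c"]   # stand-ins for the 18 benchmarks

def evaluate(model: str, dataset: str) -> float:
    """Placeholder: return a score for `model` on `dataset`."""
    # A real implementation would run inference on held-out examples.
    return 0.5

def fine_tune(model: str, dataset: str) -> str:
    """Placeholder: return an identifier for the fine-tuned model."""
    # A real implementation would run supervised fine-tuning here.
    return f"{model}+ft:{dataset}"

def run_study() -> dict:
    """Collect (before, after) scores for every checkpoint/dataset pair."""
    results = {}
    for ckpt in CHECKPOINTS:
        for ds in DATASETS:
            before = evaluate(ckpt, ds)                 # base-model performance
            after = evaluate(fine_tune(ckpt, ds), ds)   # post-fine-tuning performance
            results[(ckpt, ds)] = (before, after)
    return results
```

Comparing the `before` and `after` columns across checkpoints is what lets the paper separate gains that come from continued pre-training from gains that only appear after fine-tuning.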
Keywords
» Artificial intelligence » Fine-tuning » Supervised