Summary of Regurgitative Training: The Value of Real Data in Training Large Language Models, by Jinghui Zhang et al.
Regurgitative Training: The Value of Real Data in Training Large Language Models
by Jinghui Zhang, Dandan Qiao, Mochen Yang, Qiang Wei
First submitted to arXiv on: 3 Jul 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper investigates “regurgitative training” of Large Language Models (LLMs), in which a new LLM is trained on data generated by other LLMs. The authors find that regurgitative training substantially handicaps LLM performance, both when fine-tuning GPT-3.5 and when training transformer models from scratch. They attribute the performance loss to higher error rates and lower lexical diversity in LLM-generated data compared to real data. To mitigate this effect, they propose three strategies: ordering generated data by quality using data-driven metrics, combining data from multiple LLMs, and training an AI-detection classifier to separate LLM-generated from human-generated data. These strategies improve regurgitative training only partially, underscoring the value of real, human-generated data in training LLMs. |
| Low | GrooveSquid.com (original content) | The paper looks at what happens when a new language model is trained on data that was already generated by other models like it. The researchers found that this “regurgitative training” makes the new model perform worse than training on real human-generated data, likely because the fake data has more errors and less variety in its words. To make regurgitative training work better, they tried three ideas: sorting the fake data by how good it is, mixing data from several models together, and building a special tool that tells fake data apart from real data. These ideas help a bit, but the paper shows that real human-generated data is still the best way to train language models. |
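The first mitigation strategy above, ordering generated data by a data-driven quality metric, can be sketched in a few lines. This is an illustrative example only, not the paper's actual implementation: it uses type-token ratio (a simple proxy for lexical diversity, one of the factors the summary links to the performance loss) to rank synthetic texts and keep the most diverse fraction for training. All function names here are hypothetical.

```python
def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words; higher means more lexical diversity."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def select_top_fraction(texts, fraction=0.5):
    """Rank texts by lexical diversity and keep the top `fraction` for training."""
    ranked = sorted(texts, key=type_token_ratio, reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

# Toy synthetic corpus: one repetitive text, one diverse text.
synthetic = [
    "the cat sat on the mat the cat sat",       # repetitive, low diversity
    "a quick brown fox jumps over a lazy dog",  # varied, high diversity
]
print(select_top_fraction(synthetic, fraction=0.5))
```

A real pipeline would likely combine several metrics (error rate, perplexity, diversity) rather than a single ratio, but the selection pattern is the same: score, sort, truncate.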
Keywords
» Artificial intelligence » Fine-tuning » GPT » Language model » Transformer