
Summary of “How to Synthesize Text Data without Model Collapse?” by Xuekai Zhu et al.


How to Synthesize Text Data without Model Collapse?

by Xuekai Zhu, Daixuan Cheng, Hengli Li, Kaiyan Zhang, Ermo Hua, Xingtai Lv, Ning Ding, Zhouhan Lin, Zilong Zheng, Bowen Zhou

First submitted to arxiv on: 19 Dec 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.
Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper investigates the impact of synthetic data on language model training, particularly “model collapse,” where iterative training on self-generated data degrades performance. The authors find that the proportion of synthetic data in the training mix is negatively correlated with model performance, and they trace this to distributional shift and an over-concentration of n-gram features in synthetic text. To address the issue, they propose token-level editing of human-produced data to obtain semi-synthetic data, which prevents model collapse by constraining the test error to a finite upper bound. Extensive experiments on pre-training from scratch, continual pre-training, and supervised fine-tuning demonstrate the effectiveness of the approach.
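The core idea of token-level editing can be illustrated with a minimal sketch: keep most tokens of a human-written text and resample only a small fraction from a model’s proposal distribution, so the output stays anchored to the human data distribution. This is an illustrative simplification, not the paper’s exact algorithm; the `proposal_fn` below is a hypothetical stand-in for a trained language model’s next-token sampler.

```python
import random

def token_edit(tokens, proposal_fn, edit_prob=0.1, rng=None):
    """Token-level editing sketch: replace each token with probability
    edit_prob, keeping the rest of the human-written text intact.

    tokens      -- list of str, a human-written text already tokenized
    proposal_fn -- callable(prefix) -> replacement token; a hypothetical
                   stand-in for a language model's sampling step
    edit_prob   -- fraction of positions to resample; a small value keeps
                   the result close to the human distribution
    """
    rng = rng or random.Random(0)
    edited = []
    for tok in tokens:
        if rng.random() < edit_prob:
            edited.append(proposal_fn(edited))  # semi-synthetic token
        else:
            edited.append(tok)  # original human token
    return edited

# Toy proposal distribution over a tiny vocabulary (assumption for the demo).
vocab = ["data", "model", "training", "text"]
proposal = lambda prefix: random.Random(len(prefix)).choice(vocab)

human = "synthetic data can cause model collapse during iterative training".split()
semi_synthetic = token_edit(human, proposal, edit_prob=0.2)
print(" ".join(semi_synthetic))
```

With `edit_prob=0`, the output is exactly the human text; with `edit_prob=1`, it is fully model-generated. The paper’s argument is that a small editing rate sits between these extremes, yielding useful data diversity while the dominant human-token signal bounds the test error.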
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about how using fake data to train AI models affects their performance. Researchers found that when they used more fake data, the models actually got worse. They also discovered some problems with the way fake data is generated and proposed a new way to make it better. This could help improve AI model performance in the future.

Keywords

» Artificial intelligence  » Fine tuning  » Language model  » N gram  » Supervised  » Synthetic data  » Token