A Tale of Tails: Model Collapse as a Change of Scaling Laws
by Elvis Dohmatob, Yunzhen Feng, Pu Yang, Francois Charton, Julia Kempe
First submitted to arXiv on: 10 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract |
Medium | GrooveSquid.com (original content) | This paper asks how neural scaling laws change as synthetic (AI-generated) data makes up a growing share of training corpora: will future models keep improving, or collapse? The authors develop a theoretical framework for model collapse through the lens of scaling laws and identify several decay phenomena: loss of scaling, shifted scaling across generations, "un-learning" of skills, and grokking when human and synthesized data are mixed. Experiments with a transformer on an arithmetic task and with text generation using the Llama2 large language model validate the theory (see the illustrative sketch after this table). |
Low | GrooveSquid.com (original content) | As AI models get bigger, scientists want to know how much they will keep improving as we add more training data. But what if some of that data is fake, generated by other AI models? Will our models still learn, or will they get worse? This research finds several ways models can decline when fake data is mixed in with human-made data. The authors ran experiments with a large language model and found that their theory matches the real-world results. |
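
The tail-cutoff mechanism behind these decay phenomena can be illustrated with a toy simulation. The Python sketch below is not the paper's actual model or code; it assumes a Zipf-distributed population of "skills", an idealized learner that simply memorizes every skill it has seen, and synthetic data modeled crudely as a training distribution whose rare tail is clipped at rank k. All names (`unseen_mass`, `p_cut`) and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zipf-distributed "skills": p_i proportional to i^(-alpha).
V, alpha = 10_000, 1.5
p = np.arange(1, V + 1, dtype=float) ** (-alpha)
p /= p.sum()

def unseen_mass(train_p, n_samples, test_p, n_trials=20):
    """Test error of an idealized infinite-memory learner: the total
    test probability of skills that never appear in the training draw."""
    errs = []
    for _ in range(n_trials):
        seen = rng.choice(len(train_p), size=n_samples, p=train_p)
        mask = np.ones(len(test_p), dtype=bool)
        mask[np.unique(seen)] = False
        errs.append(test_p[mask].sum())
    return float(np.mean(errs))

# Crude stand-in for synthetic data: the generator drops the rare tail,
# so training draws only ever cover the k most frequent skills.
k = 200
p_cut = p.copy()
p_cut[k:] = 0.0
p_cut /= p_cut.sum()

print(f"tail mass beyond rank {k}: {p[k:].sum():.4f}")
for T in (10, 100, 1_000, 10_000, 100_000):
    print(f"T={T:>7}  clean: {unseen_mass(p, T, p):.4f}"
          f"  tail-cut: {unseen_mass(p_cut, T, p):.4f}")
```

Under these assumptions, error on clean data keeps shrinking as the sample size T grows, while error on tail-clipped data flattens out near the tail mass beyond rank k: a plateau reminiscent of the "loss of scaling" the summary mentions.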
Keywords
- Artificial intelligence
- Large language model
- Scaling laws
- Synthetic data
- Text generation
- Transformer