Summary of “How Much Can We Forget about Data Contamination?” by Sebastian Bordt et al.
How Much Can We Forget about Data Contamination?
by Sebastian Bordt, Suraj Srinivas, Valentyn Boreiko, Ulrike von Luxburg
First submitted to arXiv on: 4 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | This paper investigates the impact of benchmark contamination on the evaluation of Large Language Models (LLMs). The authors challenge the assumption that small-scale contamination renders benchmark evaluations invalid. They experimentally quantify the magnitude of benchmark overfitting along three scaling dimensions: the number of model parameters, the number of times an example is seen, and the number of training tokens. They find that under standard scaling, even minor contamination leads to overfitting; however, if the training data is scaled sufficiently far beyond that point, the model “forgets” the contaminated examples and the contamination becomes insignificant. The authors confirm these results with continual pre-training experiments and study how weight decay affects example forgetting (a toy sketch of this forgetting dynamic follows the table). |
Low | GrooveSquid.com (original content) | This paper helps us understand how to fairly evaluate large language models. It shows that if a small number of test questions accidentally end up in a model’s training data, the model can look better on the benchmark than it really is, but if the model then sees much more new data, this effect fades away. The authors also studied how models forget their training examples, and found that many big models have already forgotten some of what they saw early in training. |
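The forgetting dynamic described in the medium summary can be illustrated with a small toy experiment. The sketch below is not the paper’s code, models, or hyperparameters; the architecture, optimizer settings, and synthetic data are all illustrative assumptions. It memorizes a handful of randomly labeled “leaked” examples by repeating them many times, then keeps training on fresh, unrelated data and watches the memorization fade.

```python
# Minimal sketch (illustrative assumptions, not the paper's setup): a small
# model first sees a few "contaminated" examples many times, then trains only
# on fresh data; we track how its accuracy on the contaminated examples decays.
import torch
import torch.nn as nn

torch.manual_seed(0)

DIM, N_CONTAM = 32, 16
contam_x = torch.randn(N_CONTAM, DIM)        # "benchmark" examples that leaked
contam_y = torch.randint(0, 2, (N_CONTAM,))  # random labels, so success = memorization

model = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-3)
loss_fn = nn.CrossEntropyLoss()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Phase 1: contamination -- the leaked examples are seen many times.
for _ in range(144):  # echoing the repetition scale the paper explores
    opt.zero_grad()
    loss_fn(model(contam_x), contam_y).backward()
    opt.step()
print(f"after contamination: acc on leaked examples = {accuracy(contam_x, contam_y):.2f}")

# Phase 2: continued training on fresh, unrelated data (a stand-in for scaling
# up the training tokens); noisy gradients plus weight decay gradually pull the
# weights away from the memorized solution, so the leaked examples are forgotten.
for step in range(4096):
    x = torch.randn(64, DIM)
    y = torch.randint(0, 2, (64,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    if (step + 1) % 1024 == 0:
        print(f"fresh step {step + 1:5d}: acc on leaked examples = {accuracy(contam_x, contam_y):.2f}")
```

In this toy setup, lowering `weight_decay` tends to slow the fade of the memorized accuracy, which is the kind of effect the paper studies at scale when examining how weight decay drives example forgetting.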
Keywords
» Artificial intelligence » Overfitting