
Summary of Language Model Developers Should Report Train-test Overlap, by Andy K Zhang et al.


Language model developers should report train-test overlap

by Andy K Zhang, Kevin Klyman, Yifan Mai, Yoav Levine, Yian Zhang, Rishi Bommasani, Percy Liang

First submitted to arXiv on: 10 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Software Engineering (cs.SE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty summary is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)

Language models are extensively evaluated, but correctly interpreting evaluation results requires knowledge of train-test overlap, which refers to the extent to which a language model is trained on the very data it is being tested on. The public currently lacks adequate information about train-test overlap: most models have no public train-test overlap statistics, and third parties cannot directly measure train-test overlap since they do not have access to the training data. This paper documents the practices of 30 model developers, finding that just 9 developers report train-test overlap. To increase transparency, the authors take the position that language model developers should publish train-test overlap statistics and/or training data whenever they report evaluation results on public test sets.
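To make the idea of train-test overlap concrete, below is a minimal sketch of one common way to estimate it: checking whether word n-grams from each test example also appear in the training corpus. The function names, the n-gram length, and the toy data are illustrative assumptions, not the methodology of the paper being summarized.

```python
# A minimal, illustrative sketch of n-gram-based train-test overlap estimation.
# NOTE: the function names, n-gram length, and toy data are assumptions made
# for illustration; they are not the paper's own methodology.

def word_ngrams(text: str, n: int = 13) -> set[str]:
    """Return the set of word n-grams contained in a text."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def train_test_overlap(train_docs: list[str], test_examples: list[str], n: int = 13) -> float:
    """Fraction of test examples that share at least one n-gram with the training data."""
    train_grams: set[str] = set()
    for doc in train_docs:
        train_grams |= word_ngrams(doc, n)
    contaminated = sum(1 for ex in test_examples if word_ngrams(ex, n) & train_grams)
    return contaminated / max(len(test_examples), 1)

# Toy usage: the first test example appears verbatim in the training data.
train = ["the quick brown fox jumps over the lazy dog. " * 3]
test = ["the quick brown fox jumps over the lazy dog.",
        "which planet is closest to the sun?"]
print(train_test_overlap(train, test, n=5))  # 0.5 -> half the test set overlaps
```

A developer with access to its own training corpus can compute a statistic like this for each public test set it evaluates on; third parties cannot, which is why the paper argues developers should report it themselves.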
Low Difficulty Summary (GrooveSquid.com original content)

Language models are tested to see how well they work. But to understand these tests, you need to know whether the model was trained on some of the same data it is being tested with. This matters because it affects how trustworthy the test results are. Unfortunately, most model developers do not provide this information, making it hard for others to judge their models fairly. The authors examined the practices of 30 model developers and found that only a few of them share this information. They recommend that all model developers be more open about the overlap between their training data and the public test sets they report results on.

Keywords

* Artificial intelligence
* Language model