LLM Dataset Inference: Did you train on my dataset?
by Pratyush Maini, Hengrui Jia, Nicolas Papernot, Adam Dziedzic
First submitted to arXiv on: 10 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes a new method for identifying the datasets used to train large language models (LLMs), motivated by copyright cases against companies that have trained their models on unlicensed data from the internet. The authors demonstrate that previous membership inference attacks (MIAs), which aim to determine whether a given text sequence was part of an LLM's training data, are confounded by selecting non-members from a different distribution: most MIAs perform no better than random guessing when discriminating between members and non-members drawn from the same distribution. Instead, the authors propose a dataset inference method that selectively combines the MIAs that provide a positive signal for a given distribution and aggregates them to perform a statistical test on a given dataset. Their approach distinguishes the train and test splits of different subsets of the Pile with statistically significant p-values < 0.1, without any false positives. |
| Low | GrooveSquid.com (original content) | Large language models are being used more and more in everyday life, but this rise has also brought copyright cases against companies that trained their models on unlicensed data from the internet. To help address this problem, researchers have developed ways to determine whether individual text sequences were part of an LLM's training data. However, these methods are less effective than they appear, because they often compare text sequences drawn from different sources or time periods. The authors of this paper propose a new way to identify the datasets used to train LLMs that works better than previous methods. |
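The core idea in the medium summary can be sketched in code. The following is an illustrative toy example, not the authors' implementation: per-example scores from several MIAs are combined into a single score, and the suspect (possibly trained-on) set is compared against a held-out set from the same distribution with a one-sided statistical test. The MIA scores, weights, and the use of a simple z-test are all assumptions made for the sketch; the paper aggregates real MIA outputs and learns how to weight them per distribution.

```python
# Toy sketch of dataset inference: aggregate multiple MIA scores per
# example, then statistically test whether the suspect set's scores are
# higher than those of a held-out set from the same distribution.
# All data and weights below are hypothetical, for illustration only.
import random
from statistics import NormalDist, mean, stdev

def aggregate(mia_scores, weights):
    """Combine per-example scores from several MIAs into one scalar."""
    return [sum(w * s for w, s in zip(weights, ex)) for ex in mia_scores]

def one_sided_z_test(suspect, heldout):
    """One-sided two-sample z-test: p-value for 'suspect mean is larger'."""
    se = (stdev(suspect) ** 2 / len(suspect)
          + stdev(heldout) ** 2 / len(heldout)) ** 0.5
    z = (mean(suspect) - mean(heldout)) / se
    return 1.0 - NormalDist().cdf(z)

# Simulated scores from two MIAs: members (training data) score
# slightly higher on average than non-members from the same distribution.
random.seed(0)
members = [[random.gauss(0.60, 0.2), random.gauss(0.55, 0.2)]
           for _ in range(500)]
nonmembers = [[random.gauss(0.50, 0.2), random.gauss(0.50, 0.2)]
              for _ in range(500)]
weights = [1.0, 1.0]  # the paper learns these weights per distribution

p = one_sided_z_test(aggregate(members, weights),
                     aggregate(nonmembers, weights))
print(f"p-value: {p:.4g}")
```

A small p-value indicates the suspect dataset's aggregated MIA scores are significantly higher than the held-out baseline, consistent with it having been part of the training data. Aggregating over a whole dataset is what lets a weak per-example signal become a statistically significant result.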
Keywords
» Artificial intelligence » Inference