Fine-tuning can Help Detect Pretraining Data from Large Language Models
by Hengxiang Zhang, Songxin Zhang, Bingyi Jing, Hongxin Wei
First submitted to arXiv on: 9 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper tackles the problem of detecting pretraining data in large language models (LLMs) by introducing a novel method called Fine-tuned Score Deviation (FSD). The key observation is that unseen data, which can easily be collected after an LLM's release, is useful: after fine-tuning the model on a small amount of previously unseen data, its perplexities shift differently for members and non-members of the training set. FSD therefore improves existing scoring functions by measuring how far a sample's score deviates after fine-tuning on a small amount of unseen data from the same domain. Because fine-tuning decreases the scores of non-members much more than those of members, the deviation distance separates the two groups (see the sketch after the table). Extensive experiments on common benchmark datasets across various models demonstrate the effectiveness of FSD. |
Low | GrooveSquid.com (original content) | This paper solves a problem in large language models by creating a new way to detect pretraining data. Researchers have been concerned about fair evaluation and ethical risks because models may be trained on data they should not have seen. The authors found that fine-tuning a model on a small amount of new, unseen data makes it easier to tell whether a given text came from the original training data. Their method, called Fine-tuned Score Deviation (FSD), makes this detection process more reliable. By testing FSD on many models and datasets, they showed that it works well. |
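Below is a minimal sketch of the FSD idea described above, using perplexity as the underlying score and Hugging Face `transformers` as the toolkit. The model name, the fine-tuned checkpoint path, and the threshold are illustrative assumptions rather than values from the paper, and the fine-tuning step itself is omitted.

```python
# A minimal sketch of Fine-tuned Score Deviation (FSD) with perplexity as the
# scoring function. The model name, the fine-tuned checkpoint path, and the
# threshold below are placeholders, not values from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text: str) -> float:
    """Perplexity of `text` under `model`; lower means more familiar."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# `base` is the released LLM; `tuned` is the same model after fine-tuning on a
# small amount of previously unseen, same-domain data (fine-tuning omitted).
tokenizer = AutoTokenizer.from_pretrained("gpt2")                    # placeholder model
base = AutoModelForCausalLM.from_pretrained("gpt2").eval()
tuned = AutoModelForCausalLM.from_pretrained("./gpt2-tuned").eval()  # hypothetical path

def fsd_score(text: str) -> float:
    """Score deviation after fine-tuning: non-members are expected to show a
    larger perplexity drop (larger deviation) than members."""
    return perplexity(base, tokenizer, text) - perplexity(tuned, tokenizer, text)

# Flag a passage as pretraining data (a member) when its deviation stays small.
THRESHOLD = 1.0  # illustrative; chosen on held-out examples in practice
candidate = "Some passage whose membership we want to test."
print("member" if fsd_score(candidate) < THRESHOLD else "non-member")
```

Per the summaries above, the same deviation trick can be applied on top of other existing scoring functions, with perplexity being just one choice; the decision threshold would be calibrated on known member/non-member examples.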
Keywords
» Artificial intelligence » Fine tuning » Pretraining