Summary of CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation, by Tong Chen et al.
CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation
by Tong Chen, Akari Asai, Niloofar Mireshghallah, Sewon Min, James Grimmelmann, Yejin Choi, Hannaneh Hajishirzi, Luke Zettlemoyer, Pang Wei Koh
First submitted to arXiv on 9 Jul 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | This paper introduces CopyBench, a benchmark for evaluating the reproduction of copyright-protected content by language models (LMs). The authors assess both literal and non-literal similarities in LM generations, using copyrighted fiction books as text sources. They find that larger models copy significantly more: literal copying rates increase from 0.2% to 10.5%, and non-literal copying from 2.3% to 5.9%. They also evaluate current mitigation strategies, showing that training-time alignment can reduce literal copying but may increase non-literal copying, while inference-time mitigation methods primarily reduce literal, not non-literal, copying. |
| Low | GrooveSquid.com (original content) | This paper is about how much language models copy text from books. The authors want to know whether models copy not just exact words but also ideas and characters. They built a test called CopyBench to measure this across different models. They found that bigger models do more copying, and that some ways of reducing copying work better than others. |
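The summary does not spell out how CopyBench scores literal copying, but a common way to flag it is to check whether a generation shares a long word n-gram with the source text. The sketch below illustrates that idea only; the function names, the n-gram length of 8, and the rate computation are illustrative assumptions, not the paper's actual metric.

```python
def word_ngrams(text: str, n: int) -> set:
    """Return the set of word n-grams (as tuples) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def has_literal_copy(generation: str, source: str, n: int = 8) -> bool:
    """Flag a generation as literal copying if it shares any
    n-word sequence with the source (n=8 is an arbitrary choice here)."""
    return bool(word_ngrams(generation, n) & word_ngrams(source, n))

def literal_copy_rate(generations: list, source: str, n: int = 8) -> float:
    """Fraction of generations flagged as containing a literal copy."""
    if not generations:
        return 0.0
    flagged = sum(has_literal_copy(g, source, n) for g in generations)
    return flagged / len(generations)
```

Non-literal copying (reused plot events or characters rather than exact wording) would not be caught by this kind of surface overlap, which is why the paper measures it separately.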
Keywords
* Artificial intelligence
* Alignment
* Inference