Summary of The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning, by Nathaniel Li et al.
The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning
by Nathaniel Li, Alexander Pan, Anjali Gopal, Summer Yue, Daniel Berrios, Alice Gatti, Justin D. Li, Ann-Kathrin Dombrowski, Shashwat Goel, Long Phan, Gabriel Mukobi, Nathan Helm-Burger, Rassin Lababidi, Lennart Justen, Andrew B. Liu, Michael Chen, Isabelle Barrass, Oliver Zhang, Xiaoyuan Zhu, Rishub Tamirisa, Bhrugu Bharathi, Adam Khoja, Zhenqi Zhao, Ariel Herbert-Voss, Cort B. Breuer, Samuel Marks, Oam Patel, Andy Zou, Mantas Mazeika, Zifan Wang, Palash Oswal, Weiran Lin, Adam A. Hunt, Justin Tienken-Harder, Kevin Y. Shih, Kemper Talley, John Guan, Russell Kaplan, Ian Steneker, David Campbell, Brad Jokubaitis, Alex Levinson, Jean Wang, William Qian, Kallol Krishna Karmakar, Steven Basart, Stephen Fitz, Mindy Levine, Ponnurangam Kumaraguru, Uday Tupakula, Vijay Varadharajan, Ruoyu Wang, Yan Shoshitaishvili, Jimmy Ba, Kevin M. Esvelt, Alexandr Wang, Dan Hendrycks
First submitted to arXiv on: 5 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computers and Society (cs.CY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the paper’s original abstract.
Medium | GrooveSquid.com (original content) | The paper addresses the risks of large language models (LLMs) being used for malicious purposes, such as developing biological, cyber, and chemical weapons. To measure these risks, government institutions and AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private, preventing further research into mitigating risk. The authors publicly release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP was developed by a consortium of academics and technical consultants and was stringently filtered to eliminate sensitive information prior to public release. The authors also develop RMU, a state-of-the-art unlearning method based on controlling model representations (see the sketch below the table), which reduces model performance on WMDP while maintaining general capabilities.
Low | GrooveSquid.com (original content) | The paper is about making sure big language models don’t get used for bad things like developing weapons. It’s hard to measure these risks because nobody shares their methods and datasets, so it’s hard to come up with ways to make models safer. To help solve this problem, the authors created a special set of questions called WMDP that can be used to test how much language models know about biosecurity, cybersecurity, and chemical security. They also came up with a new way to “unlearn” what a language model knows so it’s not as good at doing bad things.
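The medium-difficulty summary describes RMU as unlearning by “controlling model representations.” As a rough illustration of that idea only, the PyTorch sketch below shows a two-term loss: a forget term that steers the model’s hidden states on hazardous text toward a random control vector, and a retain term that keeps hidden states on benign text close to those of a frozen reference copy. This is a minimal sketch, not the authors’ released implementation; the function name `rmu_loss`, the `layer_idx`, `control_vec`, and `alpha` parameters, and the HuggingFace-style model interface are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rmu_loss(updated_model, frozen_model, forget_batch, retain_batch,
             layer_idx, control_vec, alpha=100.0):
    """Illustrative RMU-style loss (assumed interface, not the paper's code).

    Forget term: push the updated model's activations on hazardous (forget)
    text toward a fixed random control vector. Retain term: keep activations
    on benign (retain) text close to a frozen reference model's, so general
    capabilities are preserved.
    """
    # Activations of the model being unlearned, at the chosen layer,
    # on hazardous text.
    h_forget = updated_model(**forget_batch,
                             output_hidden_states=True).hidden_states[layer_idx]
    # Steer those activations toward the (broadcast) control vector.
    forget_loss = F.mse_loss(h_forget, control_vec.expand_as(h_forget))

    # Activations on benign text, compared against the frozen reference.
    h_retain = updated_model(**retain_batch,
                             output_hidden_states=True).hidden_states[layer_idx]
    with torch.no_grad():
        h_ref = frozen_model(**retain_batch,
                             output_hidden_states=True).hidden_states[layer_idx]
    retain_loss = F.mse_loss(h_retain, h_ref)

    # alpha trades off forgetting hazardous knowledge against retaining
    # everything else.
    return forget_loss + alpha * retain_loss

# One plausible choice of control vector (an assumption of this sketch):
# a random direction scaled by a coefficient c, e.g.
#   u = torch.rand(hidden_dim); control_vec = c * u / u.norm()
```

In the paper, updates are restricted to a small subset of the model’s layers; this sketch shows only the shape of the loss and omits that and other details of the released implementation.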
Keywords
* Artificial intelligence
* Language model