

Towards Robust Evaluation of Unlearning in LLMs via Data Transformations

by Abhinav Joshi, Shaswati Saha, Divyaksh Shukla, Sriram Vema, Harsh Jhamtani, Manas Gaur, Ashutosh Modi

First submitted to arXiv on: 23 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper examines the robustness of existing Machine Unlearning (MUL) techniques for Large Language Models (LLMs). MUL aims to make an LLM forget specific information, such as personally identifiable information (PII), without degrading its performance on regular tasks. The study focuses on the effect of data transformations on forgetting: specifically, whether an unlearned LLM can still recall forgotten information when the input is expressed in a different format. The findings on the TOFU dataset highlight the importance of using diverse data formats to quantify unlearning in LLMs more reliably.
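To make the evaluation idea concrete, here is a minimal sketch (not the authors’ code) of probing an unlearned model with the same question rendered in several input formats. The checkpoint path, probe prompts, and the substring leak check are illustrative assumptions; the paper’s own evaluation uses the TOFU benchmark and its metrics.

```python
# Sketch: test whether an "unlearned" model still reveals forgotten
# information when the query format changes (QA, cloze, alternate template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/unlearned-model"  # hypothetical unlearned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Ground-truth answer the model was supposed to forget (illustrative).
forgotten_answer = "Jane Doe"

# The same query expressed via different data transformations.
probes = [
    "Who is the author of the novel 'Example Book'?",     # original QA format
    "The novel 'Example Book' was written by",            # cloze/completion format
    "Q: Name the writer of 'Example Book'.\nA:",          # alternate QA template
]

def generate(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=32)
    # Decode only the continuation, not the prompt tokens.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

for prompt in probes:
    completion = generate(prompt)
    # If any transformed format elicits the forgotten answer,
    # unlearning was not robust to that format.
    leaked = forgotten_answer.lower() in completion.lower()
    print(f"leaked={leaked}  prompt={prompt!r}  output={completion!r}")
```

A robust unlearning method should avoid leaking the forgotten answer under every format, not just the one used during unlearning; that is the gap the paper’s transformation-based evaluation is designed to expose.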
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper looks at how well existing ways to “unlearn” language models actually work. Language models are trained on lots of text from different sources, and even though people try to remove sensitive information before training, some of it, like personal details, can still slip in. To fix this, researchers are working on ways to make language models forget unwanted info without hurting their ability to do other tasks. This study checks whether these methods still work when the format of the input changes. It uses a dataset called TOFU and shows that testing with different formats gives a more reliable picture of whether the unlearning was actually successful.

Keywords

» Artificial intelligence  » Recall