Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning

by Somnath Basu Roy Chowdhury, Krzysztof Choromanski, Arijit Sehanobish, Avinava Dubey, Snigdha Chaturvedi

First submitted to arXiv on: 24 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper frames machine unlearning as the problem of efficiently removing the influence of specific training instances from a trained model without retraining it from scratch. Exact unlearning methods guarantee that a data instance’s influence is removed; they typically do so by training individual model components on disjoint subsets of the data, so that only the affected components need retraining when a deletion request arrives. Existing approaches reduce retraining cost, but retraining components can still be expensive for organizations and may require taking the system offline, disrupting service. To address these challenges, the authors introduce Sequence-aware Sharded Sliced Training (S3T), an exact unlearning framework designed to increase deletion capacity while minimizing the impact on model performance. S3T uses lightweight parameter-efficient fine-tuning to train a model’s layers sequentially, each on a disjoint slice of the data, so a deletion can be handled efficiently by simply deactivating the layers affected by the deleted slice. To further reduce retraining cost and improve performance, the authors train multiple models on different orderings of the data slices, allowing the system to handle a larger number of deletion requests.
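To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the training/deletion logic described above. This is not the authors’ implementation: `S3TModel`, `fit_layer`, and the list-of-flags representation are illustrative assumptions, with `fit_layer` standing in for a LoRA/adapter-style parameter-efficient update.

```python
# Hypothetical toy sketch of S3T's core idea: one parameter-efficient
# adapter layer is trained per data slice, in sequence. Because layer i
# is trained on top of layers 0..i-1, it depends on every earlier slice;
# deleting slice j therefore means deactivating layers j..end, with no
# retraining from scratch.

class S3TModel:
    def __init__(self, num_slices: int):
        # One adapter layer per data slice; True means the layer is active.
        self.layer_active = [True] * num_slices

    def fit_layer(self, layer_idx: int, data_slice) -> None:
        """Placeholder for a parameter-efficient fine-tuning step
        (e.g., a LoRA-style adapter update) on a single data slice."""
        pass

    def train(self, slices) -> None:
        # Sequential training: layer i sees only slice i, with all
        # earlier layers frozen beneath it.
        for i, data_slice in enumerate(slices):
            self.fit_layer(i, data_slice)

    def delete(self, slice_idx: int) -> int:
        # Exact unlearning: deactivate the layer trained on the deleted
        # slice and every layer trained after it.
        for i in range(slice_idx, len(self.layer_active)):
            self.layer_active[i] = False
        return sum(self.layer_active)  # number of layers still usable


model = S3TModel(num_slices=4)
model.train(slices=[[], [], [], []])   # dummy slices for illustration
remaining = model.delete(slice_idx=2)  # unlearn slice 2: layers 2 and 3 go off
```

Intuitively, training several such models on different slice orderings (the “sequence-aware” part) helps because a given deletion lands late in at least one ordering, so that model keeps more layers active and performance degrades more slowly as deletion requests accumulate.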
Low Difficulty Summary (original content by GrooveSquid.com)
Machine learning is like teaching computers new things! Sometimes, we need to make them “forget” something we taught them earlier. This paper explains how to do that efficiently, without re-teaching everything from scratch. It’s like deleting a photo from your phone: you don’t have to re-upload all your old photos just because you removed one! The authors introduce a new method called Sequence-aware Sharded Sliced Training (S3T) that is really good at “forgetting” specific data without affecting the rest of what the computer knows.

Keywords

» Artificial intelligence  » Fine tuning  » Machine learning  » Parameter efficient