


Forget Vectors at Play: Universal Input Perturbations Driving Machine Unlearning in Image Classification

by Changchang Sun, Ren Wang, Yihua Zhang, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Yan Yan, Sijia Liu

First submitted to arxiv on: 21 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel approach to machine unlearning (MU), the task of erasing the influence of specific unwanted data from an already-trained model. Conventional methods are model-based: they retrain or fine-tune the model’s weights. This work takes a different route, adopting an input perturbation-based perspective in which the model weights remain intact. The authors introduce the forget vector, a proactive, input-agnostic data perturbation that induces unlearning when added to inputs. They also explore forget vector arithmetic, which combines class-specific forget vectors to generate new ones for unseen unlearning tasks. Extensive experiments show performance competitive with state-of-the-art model-based methods.
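As a rough illustration of the idea (not the paper’s actual algorithm), optimizing a single input-agnostic perturbation while the model weights stay frozen might be sketched in PyTorch as follows. The function name `learn_forget_vector`, the loss weighting `lam`, the specific forget/retain objectives, and the CIFAR-style input shape are all assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def learn_forget_vector(model, forget_loader, retain_loader,
                        epochs=5, lr=0.01, lam=1.0):
    """Sketch: learn one universal input perturbation (a 'forget vector')
    with the model weights frozen. Objectives here are illustrative only."""
    delta = torch.zeros(3, 32, 32, requires_grad=True)  # assumed CIFAR-size images
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
            # Push forget-set predictions away from their true labels...
            forget_loss = -F.cross_entropy(model(xf + delta), yf)
            # ...while keeping retain-set predictions intact.
            retain_loss = F.cross_entropy(model(xr + delta), yr)
            loss = forget_loss + lam * retain_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return delta.detach()
```

The forget vector arithmetic described in the paper could then amount to a weighted sum of class-specific vectors, e.g. `delta_mix = 0.5 * delta_cat + 0.5 * delta_dog` (weights hypothetical), applied to inputs to target a new, unseen forget task.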
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about a new way to “forget” old information in already-trained models. This is important because some data regulations require models to be updated when certain data is no longer allowed. The authors came up with a new method that doesn’t need to retrain the whole model, but instead uses special inputs to make the model forget what it learned from unwanted data. They also found a way to combine these special inputs to adapt to new situations where the model needs to forget different things.

Keywords

» Artificial intelligence  » Fine tuning