Summary of Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization, by Yoonhwa Jung, Ikhyun Cho, Shun-Hsiang Hsu, and Julia Hockenmaier
Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization
by Yoonhwa Jung, Ikhyun Cho, Shun-Hsiang Hsu, Julia Hockenmaier
First submitted to arXiv on: 17 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Machine learning has faced growing concerns about privacy and regulatory compliance, leading to the concept of machine unlearning. This approach aims to selectively forget or erase specific learned information from trained models. Our novel algorithm, Attack-and-Reset for Unlearning (ARU), uses adversarial noise to create a parameter mask, effectively resetting certain parameters and making them unlearnable. ARU outperforms the current state of the art on two facial machine-unlearning benchmark datasets, MUFAC and MUCAC. A minimal code sketch of this attack-and-reset loop follows the table. |
| Low | GrooveSquid.com (original content) | Machine learning is important because it helps computers learn from data. But sometimes we need to make sure that the computer forgets certain things it learned. This is called machine unlearning. Our team developed a new way to do this called ARU (Attack-and-Reset for Unlearning). It uses special noise to change some of the computer’s settings so it can’t use what it learned before. We tested ARU, and it worked better than other methods on two big datasets. |
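The medium-difficulty summary describes ARU’s core idea: craft adversarial noise on the data to be forgotten, use it to build a parameter mask, and re-initialize the masked parameters. The PyTorch sketch below illustrates that loop under stated assumptions; the specific choices here (FGSM as the noise attack, gradient-magnitude scoring, a 10% reset ratio, and the hypothetical helper names) are illustrative and not details taken from the paper.

```python
# Minimal sketch of an attack-and-reset loop, written with PyTorch.
# FGSM, gradient-magnitude scoring, and the reset ratio are assumed choices,
# not the authors' exact procedure.
import copy
import torch
import torch.nn.functional as F


def fgsm_noise(model, x, y, eps=0.03):
    """Craft simple adversarial noise on a forget-set batch (FGSM; an assumed choice)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()


def build_parameter_mask(model, forget_loader, reset_ratio=0.1, device="cpu"):
    """Score each parameter by its gradient on adversarially noised forget data,
    then mark the highest-scoring fraction for re-initialization."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in forget_loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_noise(model, x, y)
        model.zero_grad()  # clear gradients left over from crafting the noise
        F.cross_entropy(model(x_adv), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.abs()
    # Global threshold: the top `reset_ratio` fraction of scores gets reset.
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(reset_ratio * flat.numel()))
    threshold = flat.topk(k).values.min()
    return {n: (s >= threshold) for n, s in scores.items()}


def attack_and_reset(model, forget_loader, device="cpu", reset_ratio=0.1):
    """Re-initialize only the masked parameter entries; buffers (e.g. BatchNorm
    running stats) are left untouched in this sketch."""
    mask = build_parameter_mask(model, forget_loader, reset_ratio, device)
    reinit = copy.deepcopy(model)
    for m in reinit.modules():  # draw fresh random weights module by module
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()
    reinit_params = dict(reinit.named_parameters())
    with torch.no_grad():
        for n, p in model.named_parameters():
            p[mask[n]] = reinit_params[n][mask[n]]  # overwrite only masked entries
    return model
```

In practice, a reset of this kind would typically be followed by fine-tuning on the data the model should retain, so accuracy recovers while the re-initialized parameters no longer encode the forgotten samples; that fine-tuning step is assumed here and not shown.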
Keywords
* Artificial intelligence
* Machine learning
* Mask