Potion: Towards Poison Unlearning

by Stefan Schoepf, Jack Foster, Alexandra Brintrup

First submitted to arXiv on: 13 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

The paper addresses the threat of adversarial attacks on machine learning systems, in which attackers introduce poison triggers into training datasets. The challenge is to remove the influence of these triggers from already-trained models when only a subset of the poisoned data can be identified. Previous methods have shown limited success, and even full retraining cannot reliably address the issue. The paper proposes two solutions: a novel outlier-resistant method based on Selective Synaptic Dampening (SSD) that improves both model protection and unlearning performance, and Poison Trigger Neutralisation (PTN) search, which finds suitable hyperparameters in settings where the forget set size is unknown and the retain set is contaminated. The contributions are benchmarked with ResNet-9 on CIFAR10 and WideResNet-28×10 on CIFAR100: the method heals 93.72% of poison, compared to 83.41% for SSD and 40.68% for full retraining, while lowering the average model accuracy drop caused by unlearning from 5.68% (SSD) to 1.41%.
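
The summary does not include code, but the two ideas it describes lend themselves to a short illustration. Below is a minimal, hypothetical PyTorch sketch of SSD-style dampening (shrink parameters that are far more important to the identified poisoned samples than to the data overall) and a simplified PTN-style search loop (relax the selection threshold until the poison trigger stops working). All names (`estimate_importance`, `dampen`, `ptn_search`) and thresholds (`alpha`, `lam`, `target_acc`) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of SSD-style unlearning plus a PTN-style search.
# Not the authors' code: names, thresholds, and formulas are illustrative.
import copy
import torch
import torch.nn.functional as F


def estimate_importance(model, loader, device="cpu"):
    """Diagonal importance per parameter: mean squared gradient of the loss."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()  # gradients still flow; this only fixes dropout/batch norm
    for x, y in loader:
        model.zero_grad()
        loss = F.cross_entropy(model(x.to(device)), y.to(device))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach() ** 2
    return {n: imp / max(len(loader), 1) for n, imp in importance.items()}


@torch.no_grad()
def dampen(model, imp_full, imp_forget, alpha=10.0, lam=1.0):
    """Shrink weights that matter far more to the forget set than overall."""
    for n, p in model.named_parameters():
        selected = imp_forget[n] > alpha * imp_full[n]  # outlier parameters
        beta = torch.clamp(lam * imp_full[n] / (imp_forget[n] + 1e-12), max=1.0)
        p[selected] *= beta[selected]  # dampen only the selected weights


@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / max(total, 1)


def ptn_search(model, imp_full, imp_forget, poison_loader, target_acc=0.05):
    """Relax the selection threshold until the known poison stops working."""
    alpha = 100.0
    while alpha > 0.1:
        candidate = copy.deepcopy(model)  # always dampen a fresh copy
        dampen(candidate, imp_full, imp_forget, alpha=alpha)
        if accuracy(candidate, poison_loader) <= target_acc:
            return candidate, alpha
        alpha /= 2
    return candidate, alpha
```

In this sketch, `imp_full` would be estimated on the available (possibly contaminated) training data and `imp_forget` on the identified poisoned subset; the search stops as soon as the model misclassifies the triggered samples, which serves as a stopping signal when the true forget set size is unknown.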

Low Difficulty Summary (written by GrooveSquid.com, original content)

The paper discusses how to remove poison triggers from machine learning models when only some of the poisoned data is known. This is a challenging problem because previous methods have not been very successful. The researchers propose two new solutions: one that builds on an existing method called Selective Synaptic Dampening (SSD) and makes it more effective, and another, called Poison Trigger Neutralisation (PTN) search, that finds the right settings for removing the poison triggers. They tested these solutions on the well-known CIFAR10 and CIFAR100 image datasets and found that they were able to remove most of the poison triggers.

Keywords

» Artificial intelligence  » Machine learning  » ResNet