

SAP: Corrective Machine Unlearning with Scaled Activation Projection for Label Noise Robustness

by Sangamesh Kodge, Deepak Ravikumar, Gobinda Saha, Kaushik Roy

First submitted to arXiv on: 13 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel machine unlearning algorithm called Scaled Activation Projection (SAP) is introduced to address label corruption in machine learning models. SAP uses Singular Value Decomposition (SVD) to identify trusted samples and project model weights onto a clean activation space, mitigating the impact of mislabeled training data. The algorithm demonstrates effectiveness on both synthetic and real-world label noise datasets, including CIFAR-10 with 25% corruption, achieving generalization improvements of up to 6%. SAP also outperforms noise-robust training approaches by an average of 3.2% on the CIFAR-10 dataset. Additionally, the algorithm shows a generalization improvement of 2.31% for Vision Transformer models trained on naturally corrupted Clothing1M.
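The core projection idea can be sketched in plain NumPy. This is a minimal, hypothetical illustration, not the authors' exact SAP implementation: it assumes a single linear layer with weight matrix `W`, a small set of trusted-sample activations `A`, and uses the SVD of `A` to define a clean activation subspace onto which the weights are projected. The shapes, the 95% energy threshold, and the variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear layer (d_out x d_in) and input
# activations collected from a small trusted (clean) sample set.
d_in, d_out, n_trusted = 64, 32, 200
W = rng.standard_normal((d_out, d_in))        # layer weights
A = rng.standard_normal((n_trusted, d_in))    # trusted activations

# SVD of the trusted activations: the right singular vectors span
# the directions that the clean data actually uses.
_, S, Vt = np.linalg.svd(A, full_matrices=False)

# Keep the top-k directions capturing 95% of the activation energy
# (the threshold is an assumption, not the paper's exact rule).
energy = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(energy, 0.95) + 1)
V_k = Vt[:k].T                                # (d_in, k) clean basis

# Project the weights onto the clean activation subspace; weight
# components outside it (potentially driven by mislabeled data)
# are discarded.
P = V_k @ V_k.T                               # (d_in, d_in) projector
W_clean = W @ P                               # projected weights
```

The projector `P` is symmetric and idempotent, so applying it again leaves `W_clean` unchanged; in practice one would compute such a projection per layer using activations from the identified trusted samples.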
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces a new way to make machine learning models work better when some of their training data is wrong. This problem, called label corruption, can happen when people without expertise annotate data or when bad actors try to trick the model. The new method, called Scaled Activation Projection (SAP), uses a mathematical technique called Singular Value Decomposition (SVD) to figure out which pieces of training data are correct and then adjusts the model’s behavior based on that. This makes the model perform better even when some of its training information is bad. The researchers tested SAP on several datasets, including one with artificial noise and one with real-world noise, and found that it worked well in both cases.

Keywords

* Artificial intelligence  * Generalization  * Machine learning  * Vision transformer