


Federated Unlearning Model Recovery in Data with Skewed Label Distributions

by Xinrui Yu, Wenbin Pei, Bing Xue, Qiang Zhang

First submitted to arXiv on: 18 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
A novel extension to federated learning, called federated unlearning, provides clients with a rollback mechanism that lets them withdraw their data contributions without retraining from scratch. However, existing research has overlooked scenarios with skewed label distributions, which lead to biased models and complicate the recovery process. To address this issue, a new method is proposed for recovering federated unlearning models under skewed label distributions. The method applies deep learning-based oversampling strategies to supplement data for the skewed classes, followed by density-based denoising to remove noise from the generated data. The remaining clients then leverage their enhanced local datasets and engage in iterative training to restore the performance of the unlearned model. Experimental evaluations on federated learning datasets with varying degrees of skewness demonstrate that this method outperforms baseline methods in accuracy, particularly on the skewed classes.
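The summary above describes a recovery pipeline of oversampling the skewed classes and then denoising the generated samples by density before retraining. The sketch below is an illustrative reconstruction of those two data-preparation steps only, assuming a simple SMOTE-style interpolation for oversampling and a k-nearest-neighbour distance filter for denoising; the paper's actual deep-learning oversampler and the iterative federated training loop are not reproduced here, and all names and thresholds are hypothetical.

```python
import numpy as np

def oversample_minority(X, y, minority_label, target_count, rng):
    """Generate synthetic minority samples by interpolating random pairs
    of existing minority samples (a SMOTE-like heuristic)."""
    Xm = X[y == minority_label]
    n_new = target_count - len(Xm)
    if n_new <= 0:
        return X, y
    i = rng.integers(0, len(Xm), n_new)
    j = rng.integers(0, len(Xm), n_new)
    lam = rng.random((n_new, 1))
    X_syn = Xm[i] + lam * (Xm[j] - Xm[i])
    return np.vstack([X, X_syn]), np.concatenate([y, np.full(n_new, minority_label)])

def density_filter(X_syn, X_real, k=5, quantile=0.9):
    """Keep synthetic points whose mean distance to their k nearest real
    minority points is below the given quantile; low-density outliers
    (likely noise) are dropped."""
    d = np.linalg.norm(X_syn[:, None, :] - X_real[None, :, :], axis=2)
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    return X_syn[knn_mean <= np.quantile(knn_mean, quantile)]

# Toy skewed dataset: 100 majority samples vs. 10 minority samples.
rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(100, 2))
X_minor = rng.normal(3.0, 1.0, size=(10, 2))
X = np.vstack([X_major, X_minor])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(10, dtype=int)])

# Step 1: oversample the minority class up to parity with the majority.
X_aug, y_aug = oversample_minority(X, y, minority_label=1, target_count=100, rng=rng)
X_syn = X_aug[len(X):]          # the newly generated samples
# Step 2: density-based denoising of the generated samples.
X_clean = density_filter(X_syn, X_minor, k=5, quantile=0.9)
```

After these steps, the cleaned synthetic samples would be merged into the remaining clients' local datasets for the iterative retraining phase the summary mentions.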
Low Difficulty Summary (written by GrooveSquid.com; original content)
Federated learning is a way for many devices to work together and learn from each other’s data. Sometimes, one device might want to remove its own data contribution without relearning everything from scratch. This paper looks at how to make that removal process better when the labels are not evenly distributed across the devices. Current methods don’t work well in these situations because the imbalance can create biased models that are not good for everyone. To solve this problem, the researchers came up with a new approach that uses oversampling and denoising techniques. They tested their method on several different datasets and found that it worked better than other methods at restoring the performance of the unlearned model.

Keywords

» Artificial intelligence  » Deep learning  » Federated learning