
Summary of Federated Unlearning with Gradient Descent and Conflict Mitigation, by Zibin Pan et al.


Federated Unlearning with Gradient Descent and Conflict Mitigation

by Zibin Pan, Zhichao Wang, Chi Li, Kaiyan Zheng, Boqi Wang, Xiaoying Tang, Junhua Zhao

First submitted to arxiv on: 28 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper addresses a crucial challenge in Federated Learning (FL), where the global model can inadvertently retain client data, compromising privacy. To tackle this issue, the authors propose Federated Unlearning with Orthogonal Steepest Descent (FedOSD). The method leverages an unlearning Cross-Entropy loss to overcome convergence issues and computes a steepest descent direction that minimizes conflicts between clients’ gradients. The approach achieves effective unlearning while mitigating the loss of model utility. Experimental results demonstrate FedOSD’s superiority over state-of-the-art FU algorithms in terms of both unlearning effectiveness and model utility.
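The core idea of conflict mitigation can be illustrated with a small sketch. This is not the paper’s exact algorithm: it shows the general technique of projecting an unlearning gradient so it no longer has a negative inner product with the remaining clients’ gradients, with the function name and projection loop chosen here for illustration.

```python
import numpy as np

def conflict_free_direction(g_unlearn, client_grads):
    """Project an unlearning gradient away from conflicts.

    A direction d conflicts with a client's gradient g when d @ g < 0,
    meaning a step along d would increase that client's loss. For each
    conflicting client, we remove the conflicting component of d along g.
    (Illustrative sketch only; not FedOSD's exact update rule.)
    """
    d = g_unlearn.copy()
    for g in client_grads:
        dot = d @ g
        if dot < 0:  # conflict: stepping along d would hurt this client
            d = d - (dot / (g @ g)) * g  # drop the conflicting component
    return d

# Example: the raw unlearning gradient conflicts with a remaining client.
g_u = np.array([1.0, 0.0])
client = np.array([-1.0, 1.0])
d = conflict_free_direction(g_u, [client])
# After projection, d no longer opposes the client's gradient (d @ client >= 0).
```

With no conflicts, the direction is returned unchanged; with a conflict, the projected direction still descends the unlearning loss while leaving the remaining clients’ losses no worse to first order.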
Low Difficulty Summary (original content by GrooveSquid.com)
Federated Learning is a way for devices to learn together without sharing their data. But even if the devices don’t share their data, the global model can still remember what they’ve learned. This makes it hard to delete specific information from the model, which matters for privacy. Federated Unlearning (FU) is an existing line of work that tries to remove unwanted data without retraining the entire model, but current FU methods have drawbacks, like reducing the model’s usefulness and making that usefulness hard to recover afterward. To fix these issues, the authors developed a new approach called FedOSD, which uses a special loss function and computes an unlearning direction in a way that minimizes conflicts with the other devices’ data.

Keywords

» Artificial intelligence  » Cross entropy  » Federated learning  » Loss function