

Upcycling Noise for Federated Unlearning

by Jianan Chen, Qin Hu, Fangtian Zhong, Yan Zhuang, Minghui Xu

First submitted to arXiv on: 7 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel approach to unlearning in Federated Learning (FL) with Differential Privacy (DP), addressing clients' emerging requirement of "the right to be forgotten." The authors introduce Federated Unlearning with Indistinguishability (FUI), which unlearns a target client's local data in differentially private FL (DPFL) in two steps: local model retraction and global noise calibration. The approach upcycles the noise already added by DPFL to achieve a degree of indistinguishability after local model retraction, then strengthens that guarantee through global noise calibration. The authors also formulate a Stackelberg game to derive optimal unlearning strategies for both the server and the target client. Experimental results on four real-world datasets demonstrate that FUI achieves better model performance and efficiency than mainstream federated unlearning (FU) schemes.
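To make the two steps more concrete, here is a minimal NumPy sketch of the idea. The specific update rules below (approximating retraction by subtracting the target client's averaged contribution, then topping up the existing DP noise to a required level) are illustrative assumptions, not the paper's actual formulas.

```python
# Illustrative sketch of the two FUI steps: local model retraction,
# then global noise calibration. The update rules here are assumptions
# for illustration only; the paper defines its own procedures.
import numpy as np

rng = np.random.default_rng(0)

def local_model_retraction(global_model, target_update, num_clients):
    """Step 1 (assumed form): retract the target client's averaged
    contribution from the global model to remove its influence."""
    return global_model - target_update / num_clients

def global_noise_calibration(model, existing_sigma, required_sigma):
    """Step 2 (assumed form): upcycle the Gaussian noise already present
    from DPFL, adding only the extra noise needed so the retracted model
    reaches the indistinguishability level required_sigma."""
    extra = np.sqrt(max(required_sigma**2 - existing_sigma**2, 0.0))
    return model + rng.normal(0.0, extra, size=model.shape)

# Toy usage on a 5-parameter "model".
global_model = rng.normal(size=5)
target_update = rng.normal(size=5)  # target client's accumulated update
retracted = local_model_retraction(global_model, target_update, num_clients=10)
unlearned = global_noise_calibration(retracted, existing_sigma=0.5, required_sigma=0.8)
print(unlearned)
```

Note how step 2 adds only the difference between the required and existing noise variance, which is the "upcycling" intuition: noise injected for DP is reused toward the unlearning guarantee.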
Low Difficulty Summary (original content by GrooveSquid.com)
Federated Learning is a way for many devices to train a model together without sharing their private data. To make the process even more private, the authors add Differential Privacy (DP), which mixes noise into the training. They noticed that when someone wants a specific device's data to be forgotten, this is hard to do in DPFL precisely because of that noise. So they came up with a new method called Federated Unlearning with Indistinguishability (FUI) that can unlearn a device's data while keeping everything private. The method has two steps: first retract the local model, then calibrate the global noise so the forgotten data's influence is no longer detectable. The authors also designed a game-like strategy, sketched below, that helps the server and the device decide how much unlearning to do without hurting the model's performance.
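The "game-like strategy" is a Stackelberg game: a leader commits to a choice first, anticipating the follower's best response. The toy sketch below shows only that leader-follower structure; the quadratic utility functions are invented for illustration, and the paper defines its own utilities and solves for the optimal strategies.

```python
# Toy Stackelberg game: the server (leader) picks a noise budget, and the
# target client (follower) best-responds with an unlearning effort.
# Both utility functions are made up purely to illustrate the structure.
import numpy as np

def client_best_response(noise_budget):
    # Follower: choose effort e maximizing a made-up utility
    # u_c(e) = e * (1 - noise_budget) - e**2  =>  e* = (1 - noise_budget) / 2
    return max(0.0, (1.0 - noise_budget) / 2.0)

def server_utility(noise_budget):
    e = client_best_response(noise_budget)     # leader anticipates the follower
    return e + noise_budget - noise_budget**2  # made-up leader utility

# Leader: grid-search the noise budget given the follower's best response.
budgets = np.linspace(0.0, 1.0, 101)
best = max(budgets, key=server_utility)
print(f"server noise budget: {best:.2f}, "
      f"client effort: {client_best_response(best):.2f}")
```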

Keywords

» Artificial intelligence  » Federated learning