


FedUHB: Accelerating Federated Unlearning via Polyak Heavy Ball Method

by Yu Jiang, Chee Wei Tan, Kwok-Yan Lam

First submitted to arXiv on: 17 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com, original content)
This paper proposes a novel approach to federated unlearning (FU), called FedUHB, which enables the efficient elimination of specific data influences from a shared model while preserving robust performance. By leveraging Polyak heavy ball optimization and introducing a dynamic stopping mechanism, FedUHB achieves exact unlearning with improved efficiency and reduced computational costs. The proposed method is demonstrated to be effective in federated learning settings, offering a valuable solution for addressing the growing demand for data removal upon request.

Low Difficulty Summary (GrooveSquid.com, original content)
Federated learning lets different groups work together on a shared machine learning model without sharing their individual data. This helps keep private information safe. Sometimes, we need to “unlearn” or remove specific data from the model. The problem is that current methods don’t completely erase data influence and can be slow. To solve this issue, researchers developed a new approach called FedUHB. It uses a special optimization technique and stops unlearning when it’s done. This makes it faster and more efficient. The results show that FedUHB works well and can even preserve the model’s performance after unlearning.

Keywords

* Artificial intelligence  * Federated learning  * Machine learning  * Optimization