GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge

by Jin Qian, Kaimin Wei, Yongdong Wu, Jilian Zhang, Jipeng Chen, Huan Bao

First submitted to arXiv on: 6 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)
The proposed Gradient Inversion attack based on a Style Migration Network (GI-SMN) breaks through the strong prior-knowledge assumptions made by previous gradient inversion attacks, enabling the reconstruction of user data with high similarity, even in batches. The optimization space is reduced by refining the latent code and applying regularization terms to facilitate gradient matching. GI-SMN outperforms state-of-the-art gradient inversion attacks in both visual quality and similarity metrics, and it overcomes gradient pruning and differential privacy defenses.
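
For intuition, below is a minimal PyTorch-style sketch of gradient inversion via latent-code optimization. All names (`model`, `generator`, `target_grads`, `label`, and the `latent_dim` attribute) are hypothetical stand-ins, and total variation is used as just one example of a regularization term; the paper's actual style migration network and loss design may differ.

```python
import torch

def gradient_matching_attack(model, generator, target_grads, label,
                             steps=1000, lr=0.01, tv_weight=1e-4):
    # Optimize a low-dimensional latent code z instead of raw pixels,
    # which shrinks the search space (the role GI-SMN assigns to its
    # style migration network).
    z = torch.randn(1, generator.latent_dim, requires_grad=True)  # hypothetical attribute
    optimizer = torch.optim.Adam([z], lr=lr)
    criterion = torch.nn.CrossEntropyLoss()

    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)                      # candidate reconstruction
        dummy_loss = criterion(model(x), label)
        dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                          create_graph=True)
        # Gradient-matching term: squared distance between the dummy
        # gradients and the gradients observed from the victim client.
        grad_loss = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, target_grads))
        # Total-variation regularizer, one example of a regularization
        # term that nudges the reconstruction toward natural images.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (grad_loss + tv_weight * tv).backward()
        optimizer.step()

    return generator(z).detach()              # reconstructed user data
```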

Low Difficulty Summary (GrooveSquid.com original content)
Federated learning helps keep user data private by sharing gradient information instead of the data itself. However, some attacks can recreate the original data from those gradients. This paper argues that existing attacks are hard to mount in real-life situations because they rely on unrealistic assumptions about the attacker's prior knowledge. To fill this gap, the authors propose a new attack on federated learning called GI-SMN. It is better than previous methods at recreating user data and can even get past defenses such as gradient pruning (dropping most gradient values) and differential privacy (adding noise to gradients); both defenses are sketched below.
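
For concreteness, here is a minimal sketch of those two defenses as a client might apply them to its gradients before upload. The helper names (`prune_gradients`, `add_dp_noise`) and the parameter values are hypothetical illustrations; the paper reports that GI-SMN can still reconstruct data under defenses of this kind.

```python
import torch

def prune_gradients(grads, keep_ratio=0.1):
    # Gradient pruning: keep only the largest-magnitude entries of each
    # gradient tensor and zero out the rest before sharing.
    pruned = []
    for g in grads:
        flat = g.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        threshold = flat.topk(k).values.min()
        pruned.append(torch.where(g.abs() >= threshold, g, torch.zeros_like(g)))
    return pruned

def add_dp_noise(grads, clip_norm=1.0, noise_std=0.01):
    # Differential-privacy-style defense: clip the overall gradient norm,
    # then add Gaussian noise to each tensor before sharing.
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
    return [g * scale + noise_std * torch.randn_like(g) for g in grads]
```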

Keywords

» Artificial intelligence  » Federated learning  » Optimization  » Pruning