


A New Federated Learning Framework Against Gradient Inversion Attacks

by Pengxin Guo, Shuang Zeng, Wenhao Chen, Xiaodan Zhang, Weihong Ren, Yuyin Zhou, Liangqiong Qu

First submitted to arXiv on: 10 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The Federated Learning (FL) framework, designed to protect data privacy by letting clients train machine learning models collectively without sharing raw data, is vulnerable to Gradient Inversion Attacks (GIA). To address this issue, various privacy-preserving methods have been integrated into FL, such as Secure Multi-Party Computation (SMC), Homomorphic Encryption (HE), and Differential Privacy (DP). These approaches, however, inherently involve significant privacy-utility trade-offs. This paper proposes a novel framework, Hypernetwork Federated Learning (HyperFL), which breaks the direct connection between the shared parameters and the local private data to defend against GIA. HyperFL uses hypernetworks to generate the local model parameters, and only the hypernetwork parameters are uploaded to the server for aggregation. Theoretical analyses establish the convergence rate of HyperFL, while experimental results demonstrate its privacy-preserving capability and comparable performance.
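To make the mechanism concrete, here is a minimal PyTorch sketch of the hypernetwork idea described above: each client keeps a private embedding plus a hypernetwork that generates its local classifier's weights, and only the hypernetwork's parameters are averaged by the server. The architecture, layer sizes, and names (HyperNet, Client, federated_average, FEAT_DIM, and so on) are illustrative assumptions, not the exact HyperFL design from the paper.

```python
# Sketch of hypernetwork-based FL: only hypernetwork parameters are shared.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_CLASSES, EMB_DIM = 64, 10, 32  # illustrative sizes

class HyperNet(nn.Module):
    """Maps a client embedding to the parameters of a linear classifier."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(EMB_DIM, 128), nn.ReLU())
        self.w_head = nn.Linear(128, NUM_CLASSES * FEAT_DIM)  # generates weight matrix
        self.b_head = nn.Linear(128, NUM_CLASSES)             # generates bias vector

    def forward(self, emb):
        h = self.body(emb)
        weight = self.w_head(h).view(NUM_CLASSES, FEAT_DIM)
        bias = self.b_head(h)
        return weight, bias

class Client:
    def __init__(self):
        self.embedding = nn.Parameter(torch.randn(EMB_DIM))  # stays private
        self.hyper = HyperNet()                               # only this is shared

    def local_step(self, features, labels, lr=1e-2):
        opt = torch.optim.SGD(list(self.hyper.parameters()) + [self.embedding], lr=lr)
        weight, bias = self.hyper(self.embedding)   # generate classifier parameters
        logits = F.linear(features, weight, bias)
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    def upload(self):
        # Only the hypernetwork's parameters leave the client; the generated
        # classifier weights and the private embedding never do.
        return {k: v.detach().clone() for k, v in self.hyper.state_dict().items()}

def federated_average(states):
    """Plain FedAvg over the uploaded hypernetwork parameters."""
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

if __name__ == "__main__":
    clients = [Client() for _ in range(3)]
    for c in clients:
        x = torch.randn(16, FEAT_DIM)
        y = torch.randint(0, NUM_CLASSES, (16,))
        c.local_step(x, y)
    global_state = federated_average([c.upload() for c in clients])
    for c in clients:
        c.hyper.load_state_dict(global_state)  # broadcast averaged hypernetwork
```

The point the sketch tries to capture is the decoupling the paper relies on: what is uploaded are hypernetwork parameters, which are one step removed from gradients computed directly on the private data, rather than the local model parameters themselves.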
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated Learning (FL) is a way for machines to learn together without sharing their personal data. However, this approach has a weakness: an attacker can recover private information from the updates shared during training, using what is called a Gradient Inversion Attack (GIA). To fix this, people have tried different methods to keep data private while still letting FL work, but these methods usually make it harder for the machines to learn well. This paper presents a new way to keep data private during FL that is more effective and efficient. It uses small networks called hypernetworks to generate the parameters needed for training, and only the hypernetworks' own parameters are shared with others. This makes it much harder for attackers to steal the information. The authors tested their method and found that it works well.

Keywords

  • Artificial intelligence
  • Federated learning
  • Machine learning