Federated Learning with Differential Privacy

by Adrien Banse, Jan Kreischer, Xavier Oliva i Jürgens

First submitted to arXiv on: 3 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper’s original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates federated learning (FL), a distributed machine learning approach that preserves clients’ private data. Although FL avoids sharing individual data directly, analyzing the uploaded parameter weights can still reveal sensitive information. To mitigate this risk, the authors study the impact of adding differential privacy (DP) mechanisms to the model. They run an empirical benchmark across different datasets and numbers of clients, revealing a significant decrease in performance when small, non-i.i.d. datasets are used in a distributed and differentially private setting.
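To make the idea concrete, here is a minimal sketch of a DP-FedAvg-style aggregation step of the kind the summary describes: each client's model update is L2-clipped and Gaussian noise is added to the average. This is illustrative only, not the paper's implementation; the function name and parameter values are assumptions.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Aggregate client updates with a Gaussian-mechanism DP step (illustrative sketch).

    Each update is clipped to an L2 ball of radius `clip_norm`, the clipped
    updates are averaged, and Gaussian noise with standard deviation
    `noise_multiplier * clip_norm / num_clients` is added to the average.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for update in client_updates:
        update = np.asarray(update, dtype=float)
        norm = np.linalg.norm(update)
        # Scale the update down if its L2 norm exceeds clip_norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(update * scale)
    average = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return average + rng.normal(0.0, sigma, size=average.shape)
```

The clipping bounds each client's influence on the aggregate, which is what makes the added Gaussian noise sufficient for a formal DP guarantee; larger `noise_multiplier` gives stronger privacy but, as the paper's benchmark indicates, degrades model performance, especially with few clients and small, non-i.i.d. datasets.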
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning is a way for many devices to learn together without sharing their personal information. Even with this protection, some details can still be uncovered by examining the models the devices share. The researchers studied how adding extra privacy measures affects the model’s performance on different types of data. They found that small datasets and unevenly distributed data suffer the biggest drop in performance when devices learn together while keeping things private.

Keywords

* Artificial intelligence  * Federated learning  * Machine learning