Private and Communication-Efficient Federated Learning Based on Differentially Private Sketches

by Meifan Zhang, Zhanhong Xie, Lihua Yin

First submitted to arXiv on: 8 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed federated learning (FL) method, DPSFL, addresses two primary challenges in FL: privacy leakage caused by parameter sharing and communication inefficiency. DPSFL compresses local gradients with a differentially private count sketch, improving communication efficiency while guaranteeing differential privacy (DP). The paper provides a theoretical analysis of both privacy and convergence. However, gradient clipping, which is essential in DP learning, introduces bias into the gradients and degrades FL performance; an enhanced variant, DPSFL-AC, therefore adopts an adaptive clipping strategy to mitigate this effect. Experimental comparisons demonstrate the superiority of the proposed methods in terms of privacy preservation, communication efficiency, and model accuracy.
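To make the gradient-compression step concrete, here is a minimal illustrative sketch in Python/NumPy of what client-side count-sketch compression with clipping and noise could look like. All function names and parameters (depth, width, clip, sigma) are assumptions for illustration, and the noise scale in particular is a placeholder rather than the paper's calibrated DP mechanism.

import numpy as np

def _hashes(depth, width, dim, seed=0):
    # Derive per-row bucket and sign hashes for every gradient coordinate.
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, width, size=(depth, dim))
    signs = rng.choice([-1.0, 1.0], size=(depth, dim))
    return buckets, signs

def dp_sketch_gradient(grad, depth=5, width=256, clip=1.0, sigma=0.5, seed=0):
    # Clip the gradient to L2 norm `clip`, fold it into a depth x width
    # count sketch, then add Gaussian noise before uploading to the server.
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    buckets, signs = _hashes(depth, width, grad.size, seed)
    sketch = np.zeros((depth, width))
    for j in range(depth):
        np.add.at(sketch[j], buckets[j], signs[j] * grad)
    noise = np.random.default_rng(seed + 1).normal(0.0, sigma * clip, sketch.shape)
    return sketch + noise  # placeholder noise scale, not the paper's calibration

def decode_sketch(sketch, dim, seed=0):
    # Estimate each coordinate as the median of its signed bucket values.
    depth, width = sketch.shape
    buckets, signs = _hashes(depth, width, dim, seed)
    rows = [signs[j] * sketch[j, buckets[j]] for j in range(depth)]
    return np.median(np.stack(rows), axis=0)

# Example: a client sketches its gradient; the server decodes an estimate.
g = np.random.default_rng(42).normal(size=1000)
g_hat = decode_sketch(dp_sketch_gradient(g), dim=1000)

Because count sketches are linear, a server can average the uploaded sketches entry-wise and decode once to obtain an approximate averaged gradient, which is what makes this compression attractive for FL aggregation.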
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning (FL) is a way for devices to learn together without sharing their data. But it has some problems: when devices share their models, privacy might be at risk, and communicating between devices takes a lot of time and energy. To fix these issues, scientists created a new method called DPSFL. It uses special sketches that keep data private while also making communication more efficient. The researchers also worked out the math to show why this works and what we can expect from it. Another problem: clipping gradients (a necessary step in DP learning) makes them biased, which hurts FL performance. To fix this, the researchers came up with an adaptive way of clipping, called DPSFL-AC. Both methods worked really well in tests, showing that privacy is protected, communication is efficient, and the models are accurate.
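As a rough illustration of the adaptive clipping idea: this summary does not spell out the exact DPSFL-AC update rule, so the quantile-tracking rule below is a commonly used stand-in, not the paper's method. It clips each client gradient to the current threshold and then nudges the threshold toward a target quantile of the observed norms.

import numpy as np

def adaptive_clip_round(grads, clip, target_quantile=0.5, lr=0.2):
    # Clip each client gradient to the current threshold, then move the
    # threshold toward the target quantile of observed norms. Illustrative
    # only; a fully private system would also add noise to `frac_below`.
    norms = [np.linalg.norm(g) for g in grads]
    clipped = [g * min(1.0, clip / (n + 1e-12)) for g, n in zip(grads, norms)]
    frac_below = np.mean([n <= clip for n in norms])
    # Shrink the threshold when most norms fall below it, grow it otherwise.
    new_clip = clip * np.exp(-lr * (frac_below - target_quantile))
    return clipped, new_clip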

Keywords

» Artificial intelligence  » Federated learning