
Low-Cost Privacy-Preserving Decentralized Learning

by Sayan Biswas, Davide Frey, Romaric Gaudel, Anne-Marie Kermarrec, Dimitri Lerévérend, Rafael Pires, Rishi Sharma, François Taïani

First submitted to arXiv on: 18 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (GrooveSquid.com, original content)
This paper introduces Zip-DL, a privacy-aware decentralized learning algorithm that enables nodes to train models collectively without sharing raw data or relying on a central server. By leveraging correlated noise, Zip-DL achieves robust privacy against local adversaries while ensuring efficient convergence at low communication costs. The design requires only one communication round per gradient descent iteration, significantly reducing overhead compared to competitors. Theoretical bounds are established for both convergence speed and privacy guarantees. Extensive experiments demonstrate the practical applicability of Zip-DL, outperforming state-of-the-art methods in the accuracy vs. vulnerability trade-off. Specifically, Zip-DL reduces membership-inference attack success rates by up to 35%, decreases attack efficacy by up to 13%, and achieves up to 59% higher accuracy compared to a state-of-the-art privacy-preserving approach under the same threat model.
Low Difficulty Summary (GrooveSquid.com, original content)
This paper is about a new way for computers to learn together without sharing their data or using a central server. It’s called Zip-DL, and it uses special noise to keep information private while still getting good results. This algorithm can work with very little communication between the computers, making it efficient and practical for real-world use. The paper shows that Zip-DL is better than other methods at balancing accuracy and privacy. It can reduce the success rate of attacks by up to 35% and increase its own accuracy by up to 59%. This makes it a useful solution for many applications.
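To make the correlated-noise idea concrete, here is a minimal, self-contained sketch of how pairwise-correlated noise can mask individual models while canceling in the network-wide average. This is an illustration of the general mechanism described in the summaries, not the paper's exact Zip-DL construction; the topology, noise scale, and variable names are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully-connected topology: 3 nodes, each holding a 4-dimensional model.
models = [rng.normal(size=4) for _ in range(3)]
n = len(models)

# Pairwise zero-sum noise: for each pair (i, j), node i adds z to the copy
# it sends to j while node j adds -z to the copy it sends to i, so the
# noises cancel when all exchanged copies are summed.
noise = [[None] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        z = rng.normal(scale=5.0, size=4)
        noise[i][j] = z
        noise[j][i] = -z

# One communication round: each node receives a noisy copy from every
# neighbor and averages it with its own model.
received = []
for j in range(n):
    shares = [models[j]] + [models[i] + noise[i][j] for i in range(n) if i != j]
    received.append(np.mean(shares, axis=0))

# Individually, each received copy is masked by large noise, but the noise
# cancels in the global average, so collective learning is unaffected.
true_avg = np.mean(models, axis=0)
global_avg = np.mean(received, axis=0)
print(np.allclose(global_avg, true_avg))  # True: noise cancels network-wide
```

Because the noise terms sum to zero across the network, a local adversary observing any single exchanged copy sees a heavily perturbed model, yet the aggregate that drives convergence is exact; this is the intuition behind achieving privacy without extra communication rounds.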

Keywords

* Artificial intelligence  * Gradient descent  * Inference