

The Privacy Power of Correlated Noise in Decentralized Learning

by Youssef Allouah, Anastasia Koloskova, Aymane El Firdoussi, Martin Jaggi, Rachid Guerraoui

First submitted to arXiv on: 2 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC); Optimization and Control (math.OC); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed Decor framework combines decentralized learning with differential privacy (DP) guarantees, allowing users to share models while preserving privacy. In this variant of decentralized stochastic gradient descent (SGD), users exchange randomness seeds to generate pairwise-canceling correlated Gaussian noise, which is injected into local models to protect them from leakage. Theoretical and empirical results demonstrate that Decor matches the optimal privacy-utility trade-off of central DP on arbitrary connected graphs under SecLDP, a new relaxation of local DP. The framework also includes a companion privacy accountant for public use.
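The pairwise-canceling idea can be illustrated with a short sketch. The code below is not the authors' implementation: the function and variable names are illustrative assumptions, and it omits the per-iteration SGD updates and any additional independent local noise the full protocol may use. It only shows how two neighbors drawing from a shared seed, with opposite signs, produce noise that masks each local model yet cancels in aggregate.

```python
import numpy as np

def pairwise_noise(user, neighbors, pair_seeds, dim, sigma):
    """Sum of correlated Gaussian noise terms for one user.

    For each edge (i, j), both endpoints draw the same noise vector
    from a shared seed; the lower-id user adds it and the higher-id
    user subtracts it, so the terms cancel when models are averaged.
    (Illustrative sketch, not the paper's exact construction.)
    """
    total = np.zeros(dim)
    for nb in neighbors:
        rng = np.random.default_rng(pair_seeds[frozenset((user, nb))])
        z = sigma * rng.standard_normal(dim)
        total += z if user < nb else -z
    return total

# Three fully connected users with one shared seed per edge.
users = [0, 1, 2]
seeds = {frozenset(e): s for e, s in [((0, 1), 11), ((0, 2), 22), ((1, 2), 33)]}
noises = [pairwise_noise(u, [v for v in users if v != u], seeds, 4, 1.0)
          for u in users]

# Each individual noise vector is large (it hides that user's model),
# but the noises sum to exactly zero across the network.
print(np.allclose(sum(noises), np.zeros(4)))  # True
```

An eavesdropper who sees only one user's noised model cannot remove the noise, but honest averaging over all users recovers the unperturbed mean, which is why the scheme can match the central-DP trade-off.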
Low Difficulty Summary (written by GrooveSquid.com, original content)
Decentralized learning is an exciting way to train models collaboratively without relying on a central authority. However, it is not completely private: users can still learn about each other's data from the models they exchange. To fix this, the researchers propose Decor, a new way to share models while keeping them private. It works by having each pair of neighboring users agree on a shared random seed, so the noise they add to their model updates cancels out across the network. Even if someone sneaks a peek at another user's model, they won't learn anything useful. The team shows that this approach is both theoretically sound and practical.

Keywords

  • Artificial intelligence
  • Stochastic gradient descent