
SPriFed-OMP: A Differentially Private Federated Learning Algorithm for Sparse Basis Recovery

by Ajinkya Kiran Mulay, Xiaojun Lin

First submitted to arXiv on: 29 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Sparse basis recovery is challenging in Federated Learning (FL) settings. In this paper, the researchers develop a new differentially private algorithm, SPriFed-OMP, to address this issue. The algorithm combines secure multi-party computation (SMPC) and differential privacy (DP) to efficiently recover the true sparse basis of a linear model using only O(sqrt(p)) samples. SPriFed-OMP terminates in a small number of steps and outperforms previous state-of-the-art DP-FL solutions in terms of accuracy-privacy trade-offs. The authors also present an enhanced version, SPriFed-OMP-GRAD, based on gradient privatization, which further improves performance.
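To make the idea behind the medium-difficulty summary concrete, here is a minimal sketch of Orthogonal Matching Pursuit (OMP) with Gaussian noise added to the column-correlation scores at each step. This illustrates the general privatize-the-selection-statistic pattern only; it is not the paper's actual protocol, which computes these quantities under SMPC and calibrates the noise according to its own privacy analysis. The function name `noisy_omp` and the `noise_scale` parameter are illustrative inventions, not from the paper.

```python
import numpy as np

def noisy_omp(X, y, k, noise_scale=0.0, rng=None):
    """Greedy sparse recovery: at each of k steps, pick the column of X
    most correlated with the current residual, after adding Gaussian
    noise to the correlation scores (a DP-style privatization sketch,
    not the paper's exact SMPC+DP mechanism)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    support = []
    residual = y.copy()
    beta_s = np.zeros(0)
    for _ in range(k):
        corr = X.T @ residual / n
        corr += rng.normal(0.0, noise_scale, size=p)  # privatize scores
        corr[support] = 0.0  # never re-select a chosen column
        support.append(int(np.argmax(np.abs(corr))))
        # Least-squares refit on the current support, then update residual
        beta_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta_s
    beta = np.zeros(p)
    beta[support] = beta_s
    return beta, sorted(support)
```

With `noise_scale=0.0` this reduces to plain OMP; increasing the noise trades recovery accuracy for privacy, which is the trade-off the paper's analysis quantifies.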
Low Difficulty Summary (original content by GrooveSquid.com)
Imagine you’re trying to learn about the human brain, but you can’t get all the data from one place because it’s too big and complicated. That’s kind of like what happens in Federated Learning (FL) when we want to use lots of different computers to help us figure out a problem together. The trick is to keep each computer’s information private while still learning something new. This paper develops a way to do just that using an algorithm called SPriFed-OMP. It works by combining two important ideas: keeping data private and finding the most important parts of it. The authors show that their method can learn about really big problems with surprisingly few “brain cells” (or data points). They also came up with a way to make it even better, which they call SPriFed-OMP-GRAD.

Keywords

* Artificial intelligence
* Federated learning
* Machine learning