
Summary of Banded Square Root Matrix Factorization for Differentially Private Model Training, by Nikita P. Kalinin et al.


Banded Square Root Matrix Factorization for Differentially Private Model Training

by Nikita P. Kalinin, Christoph Lampert

First submitted to arXiv on 22 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Differentially private model training relies heavily on matrix factorization techniques, but current state-of-the-art methods incur high computational costs because they solve complex optimization problems. The proposed BSR (Banded Square Root) approach avoids this by building on properties of the standard matrix square root, which lets it handle large-scale problems efficiently. For stochastic gradient descent with momentum and weight decay, the authors derive analytical expressions for BSR that reduce its computational overhead to nearly zero. They also establish theoretical bounds on approximation quality for both centralized and federated learning settings. Numerical experiments confirm that models trained using BSR match the quality of existing methods while avoiding their computational burden.
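
As a rough illustration of the idea described above (a minimal sketch, not necessarily the authors' exact construction), the Python/NumPy snippet below factorizes the prefix-sum workload matrix A as A = B·C: the factor C is the matrix square root of A truncated to a fixed number of bands, and B is chosen so the factorization stays exact. The plain prefix-sum workload (no momentum or weight decay), the role assigned to the banded factor, and all function names are illustrative assumptions.

import numpy as np

def sqrt_coeffs(n):
    # Taylor coefficients of (1 - x)^(-1/2); these are the entries of the
    # lower-triangular Toeplitz square root of the prefix-sum matrix.
    r = np.empty(n)
    r[0] = 1.0
    for k in range(1, n):
        r[k] = r[k - 1] * (2 * k - 1) / (2 * k)
    return r

def banded_square_root_factorization(n, bands):
    # A: prefix-sum workload matrix (lower-triangular all-ones).
    A = np.tril(np.ones((n, n)))
    # C: matrix square root of A, truncated to `bands` diagonals.
    r = sqrt_coeffs(n)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - bands + 1), i + 1):
            C[i, j] = r[i - j]
    # B completes the factorization so that A = B @ C holds exactly.
    B = A @ np.linalg.inv(C)
    return A, B, C

A, B, C = banded_square_root_factorization(n=16, bands=4)
print(np.max(np.abs(A - B @ C)))  # ~1e-15, i.e. exact up to rounding

In matrix-factorization mechanisms for private training, noise calibrated to the sensitivity of C is injected and then mapped through B, so keeping C banded is what makes the mechanism cheap to apply; how the BSR factors are used downstream is described in the paper itself.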
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine you want to train a model while making sure it keeps your data private. Current methods do this by solving complex math problems, which takes a long time. Researchers have found a way to speed up this process using something called BSR (Banded Square Root). The new method is efficient and can handle big datasets. The scientists tested it and found that it is just as good at making predictions as the old methods, and just as good at keeping data private, but much faster.

Keywords

» Artificial intelligence  » Federated learning  » Optimization  » Stochastic gradient descent