
Summary of Distributed Quasi-Newton Robust Estimation Under Differential Privacy, by Chuhan Wang et al.


Distributed quasi-Newton robust estimation under differential privacy

by Chuhan Wang, Lixing Zhu, Xuehu Zhu

First submitted to arXiv on: 22 Aug 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Statistics Theory (math.ST)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a robust quasi-Newton estimation method for distributed computing under privacy-protection constraints in Byzantine machine settings. The approach requires each node to transmit only five vectors to the central processor, which reduces both the privacy budget and the transmission cost compared with gradient descent and Newton iteration methods. The proposed algorithm does not rely on bounded gradients and second-order derivatives, so it remains suitable when these quantities follow sub-exponential distributions. Numerical studies demonstrate the effectiveness of the proposed method.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper develops a new way to do calculations on many computers that are connected to each other but have privacy concerns. The authors create an algorithm that needs only five pieces of information to be sent from each computer, which is better than other methods that need more information or take longer. The algorithm works well even if the information being processed has unusual patterns. The researchers tested their method on made-up and real data sets and showed it was effective.

Keywords

* Artificial intelligence  * Gradient descent