Summary of Mitigating Noise Detriment in Differentially Private Federated Learning with Model Pre-training, by Huitong Jin et al.
Mitigating Noise Detriment in Differentially Private Federated Learning with Model Pre-training
by Huitong Jin, Yipeng Zhou, Laizhong Cui, Quan Z. Sheng
First submitted to arXiv on: 18 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Pre-training exploits public datasets to pre-train an advanced machine learning model so that it can be easily tuned for various downstream tasks. The paper explores how pre-training can mitigate the negative impact of noise in differentially private federated learning (DPFL), which adds private noise to protect model gradients. The study compares three approaches: head fine-tuning (HT), full fine-tuning (FT), and scratch training (ST). Pre-trained models were tuned on the CIFAR-10, CHMNIST, and Fashion-MNIST datasets, showing that HT and FT significantly reduce the impact of noise by cutting the number of times gradients are exposed to it. HT outperformed FT when the privacy budget was tight or the model size was large (a minimal code sketch of these tuning strategies follows the table). |
| Low | GrooveSquid.com (original content) | DPFL is a way to train models on private data across multiple devices while keeping that data private. To make this work, DPFL adds noise to the model's updates, but this noise can hurt the model's performance. The researchers found that pre-training the model before using it in DPFL helps reduce the impact of this noise and improves the model's accuracy. |
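To make the HT/FT comparison more concrete, here is a minimal PyTorch-style sketch, not taken from the paper: a DP-SGD-like noisy update applied only to trainable parameters, with the pre-trained backbone frozen for head fine-tuning. The model, clipping norm, and noise multiplier are illustrative choices, and the simple per-tensor clipping stands in for the per-example clipping used in practice.

```python
# Minimal sketch (not the authors' code): head fine-tuning (HT) vs. full
# fine-tuning (FT) under a DP-SGD-style Gaussian mechanism. All shapes and
# hyperparameters below are illustrative, not values from the paper.
import torch
import torch.nn as nn

def dp_noisy_step(params, clip_norm=1.0, noise_multiplier=1.0, lr=0.1):
    """Clip each trainable tensor's gradient, add Gaussian noise, take an SGD step.

    Fewer trainable tensors (as in HT) means fewer gradients are exposed to
    noise, which is the intuition behind the paper's HT/FT/ST comparison.
    """
    for p in params:
        if p.grad is None:
            continue
        scale = clip_norm / (p.grad.norm() + 1e-12)
        p.grad.mul_(torch.clamp(scale, max=1.0))                              # clip
        p.grad.add_(torch.randn_like(p.grad) * noise_multiplier * clip_norm)  # noise
        p.data.add_(p.grad, alpha=-lr)                                        # update

# A toy "pre-trained" backbone plus a task-specific head.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head = nn.Linear(64, 10)
model = nn.Sequential(backbone, head)

# Head fine-tuning (HT): freeze the backbone so only the head receives noisy updates.
for p in backbone.parameters():
    p.requires_grad_(False)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

trainable = [p for p in model.parameters() if p.requires_grad]
dp_noisy_step(trainable)  # for FT, unfreeze the backbone and pass all parameters
```

Under a tight privacy budget or with a large model, freezing the backbone shrinks the set of noised parameters, which is one way to read the paper's finding that HT can outperform FT in those regimes.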
Keywords
» Artificial intelligence » Federated learning » Fine-tuning » Machine learning