Summary of Federated Transfer Learning with Differential Privacy, by Mengchu Li et al.
Federated Transfer Learning with Differential Privacy
by Mengchu Li, Ye Tian, Yang Feng, Yi Yu
First submitted to arXiv on: 17 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Statistics Theory (math.ST); Methodology (stat.ME); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper addresses two key challenges in Federated Learning: data heterogeneity and privacy. It presents a framework for transferring information from multiple heterogeneous datasets while maintaining privacy constraints. The authors rigorously formulate the concept of “Federated Differential Privacy”, which provides guarantees without relying on a trusted central server. They investigate three statistical problems: univariate mean estimation, low-dimensional linear regression, and high-dimensional linear regression. Their analysis reveals the costs of both data heterogeneity and privacy in Federated Learning, and highlights the benefits of knowledge transfer across datasets. |
| Low | GrooveSquid.com (original content) | This paper helps us learn more about how computers can work together while keeping our personal information safe. It shows how to share information from many different sources without putting that information at risk. The authors come up with a new way to keep data private, called “Federated Differential Privacy”. They test this idea on three big problems: finding the average, doing simple math problems, and analyzing lots of data. Their results show that we have to balance the need for privacy with the need to share information. |
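To give a feel for the simplest of the three problems, here is a minimal sketch of federated univariate mean estimation under differential privacy, using the classic Laplace mechanism. This is an illustration only, not the paper's actual estimator: the function names are hypothetical, data is assumed pre-clipped to a known range, and each site adds its own noise so no trusted central server is needed (the "federated" flavor of the privacy guarantee the summary describes).

```python
import math
import random


def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def private_local_mean(data, epsilon, lo=0.0, hi=1.0):
    """Epsilon-DP mean of one site's data.

    Values are clipped to [lo, hi]; the mean of n clipped values has
    sensitivity (hi - lo) / n, so Laplace noise with scale
    sensitivity / epsilon gives epsilon-differential privacy.
    """
    n = len(data)
    clipped = [min(max(x, lo), hi) for x in data]
    sensitivity = (hi - lo) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)


def federated_mean(sites, epsilon):
    # Each site privatizes locally before sharing, so the server only
    # ever sees noisy summaries; it simply averages them.
    noisy_means = [private_local_mean(d, epsilon) for d in sites]
    return sum(noisy_means) / len(noisy_means)
```

The trade-off the summary mentions shows up directly here: a smaller `epsilon` (stronger privacy) means larger noise on each local mean, while heterogeneous sites pull the averaged estimate toward their own means.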
Keywords
* Artificial intelligence * Federated learning * Linear regression