

Private Heterogeneous Federated Learning Without a Trusted Server Revisited: Error-Optimal and Communication-Efficient Algorithms for Convex Losses

by Changyu Gao, Andrew Lowy, Xingyu Zhou, Stephen J. Wright

First submitted to arXiv on: 12 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Optimization and Control (math.OC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, which can be read on the paper's arXiv page.

Medium Difficulty Summary (GrooveSquid.com original content)
This research focuses on federated learning (FL) with private data from individuals who do not trust the server or other clients. Each silo (for example, a hospital) holds data from many people and must protect those records even if the server and/or other silos try to uncover them. The paper works under Inter-Silo Record-Level Differential Privacy (ISRL-DP), which prevents any silo's data from being leaked by requiring every message a silo sends to satisfy item-level differential privacy. Prior work characterized the optimal excess risk bounds for ISRL-DP algorithms with homogeneous data and convex loss functions, but left open whether those bounds can be achieved with heterogeneous silo data and with fewer communication rounds. This paper answers both questions positively, providing novel ISRL-DP FL algorithms that attain the optimal excess risk bounds with heterogeneous silo data while using fewer communication rounds, and that improve on the prior state of the art in both communication complexity and computational cost.
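To make the ISRL-DP requirement concrete, below is a minimal sketch, not the paper's actual algorithm, of a noisy-gradient federated round in the spirit of item-level DP: each silo clips per-record gradients and adds Gaussian noise before anything leaves the silo, so the server only ever sees privatized messages. The linear-regression loss, clipping bound, noise multiplier, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def silo_noisy_gradient(w, X, y, clip=1.0, noise_multiplier=1.0):
    """Privatized gradient computed inside one silo (toy linear-regression loss).

    Each record's gradient is clipped to L2 norm `clip`, the clipped gradients
    are averaged, and Gaussian noise calibrated to the clipping bound is added
    before the result is communicated, so the message the silo sends is already
    record-level differentially private for that silo's data.
    """
    n, d = X.shape
    per_record = (X @ w - y)[:, None] * X                       # (n, d) per-record gradients
    norms = np.linalg.norm(per_record, axis=1, keepdims=True)
    per_record *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip / n, size=d)
    return per_record.mean(axis=0) + noise

# Toy federated loop: the server only aggregates the privatized messages.
d, lr = 5, 0.1
silos = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(d)
for _ in range(20):                                              # communication rounds
    msgs = [silo_noisy_gradient(w, X, y) for X, y in silos]      # heterogeneous silos
    w -= lr * np.mean(msgs, axis=0)                              # server update
```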
Low Difficulty Summary (GrooveSquid.com original content)
This research is about a way for different hospitals to learn from each other's medical data while keeping patient privacy safe. The main challenge is that each hospital holds confidential records and does not trust the other hospitals or the central server to keep those records private. To address this, the authors use a privacy requirement called Inter-Silo Record-Level Differential Privacy (ISRL-DP), which ensures that each hospital's data remains private even if others try to access it. The paper improves upon previous work by showing how to meet this requirement even when hospitals hold very different kinds of records, and by reducing the amount of communication needed between the hospitals and the server.

Keywords

» Artificial intelligence  » Federated learning  » Machine learning