
Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach

by Bin Wang, Jun Fang, Hongbin Li, Yonina C. Eldar

First submitted to arXiv on: 28 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC); Signal Processing (eess.SP)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary
Written by: Paper authors
Read the original abstract here

Medium Difficulty Summary
Written by: GrooveSquid.com (original content)
The paper presents Confederated Learning (CFL), an extension of federated learning (FL) that overcomes the scalability limits of a single-server setup by training across multiple networked edge servers, each connected to its own set of users. To reduce communication overhead, the authors propose a SAGA-based stochastic gradient method with a conditionally-triggered user selection (CTUS) mechanism, which lets each server ask only a small number of users to upload their gradients in each round while preserving convergence. Theoretical analysis establishes a linear convergence rate, and simulations show a substantial improvement in communication efficiency over state-of-the-art algorithms. A rough, illustrative sketch of the triggering idea appears after the summaries below.
Low Difficulty Summary
Written by: GrooveSquid.com (original content)
The paper builds on federated learning (FL), a way for many small computers to work together to train one model without sending their raw data to a central place. Standard FL uses a single server, which limits how many users it can handle and how good the model can get. To fix this, the authors created Confederated Learning (CFL), in which several servers cooperate and each server asks only a few of its users to send their updates in each round. This greatly reduces how much information has to travel between the computers. The authors also show that the model still learns reliably even though fewer updates are sent, and in their tests the new method worked much better than other methods.
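
To make the triggering idea above concrete, here is a minimal Python sketch of event-triggered gradient uploads on a toy least-squares problem with two servers. The trigger rule, threshold, step size, per-user gradient table, and the plain averaging across servers are illustrative assumptions for this sketch; they are not the paper's actual CTUS rule or SAGA update.

# Toy sketch of event-triggered gradient uploads in a multi-server (confederated)
# setting. This is NOT the paper's exact CTUS/SAGA algorithm: the trigger rule,
# threshold, step size, and aggregation below are simplified placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares data: user u of server s holds (A[s][u], b[s][u]).
dim, users_per_server, num_servers = 5, 4, 2
A = [[rng.standard_normal((20, dim)) for _ in range(users_per_server)]
     for _ in range(num_servers)]
x_true = rng.standard_normal(dim)
b = [[A[s][u] @ x_true for u in range(users_per_server)] for s in range(num_servers)]

x = np.zeros(dim)  # global model shared by the servers
# Each server keeps the last gradient received from each of its users.
grad_table = [[np.zeros(dim) for _ in range(users_per_server)]
              for _ in range(num_servers)]
step, threshold = 0.1, 1e-3  # illustrative values, not from the paper

for rnd in range(300):
    uploads = 0
    for s in range(num_servers):
        for u in range(users_per_server):
            g = A[s][u].T @ (A[s][u] @ x - b[s][u]) / A[s][u].shape[0]
            # Event trigger: upload only if the local gradient changed enough
            # since the last upload; otherwise the server reuses its stored copy.
            if np.linalg.norm(g - grad_table[s][u]) > threshold:
                grad_table[s][u] = g
                uploads += 1
    # Servers fuse their stored gradients (here: a plain average) and update x.
    avg_grad = np.mean([grad_table[s][u] for s in range(num_servers)
                        for u in range(users_per_server)], axis=0)
    x -= step * avg_grad

print("model error:", np.linalg.norm(x - x_true), "uploads in last round:", uploads)

In this sketch, the per-user gradient table is what makes skipped uploads cheap: a user whose gradient has barely changed simply keeps its last stored value in the aggregate, so communication drops without discarding that user's contribution.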

Keywords

  • Artificial intelligence
  • Federated learning