
Summary of FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models, by Kai Yi et al.


FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models

by Kai Yi, Georg Meinhardt, Laurent Condat, Peter Richtárik

First submitted to arXiv on: 14 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents an innovative approach to Federated Learning (FL) that addresses a critical bottleneck: communication cost. The authors propose FedComLoc, an algorithm that combines local training with compression techniques to reduce the communication complexity of FL. Building upon the Scaffnew algorithm, which has previously shown promise in this area, FedComLoc integrates practical and effective compression methods to further enhance efficiency. Experimental results using popular compressors and quantization demonstrate the effectiveness of FedComLoc in heterogeneous settings. An illustrative code sketch of this idea appears after these summaries.

Low Difficulty Summary (original content by GrooveSquid.com)
Federated Learning is a way for different devices or computers to work together without sharing their personal data. One problem with this approach is that it can take a lot of time and energy to send all the information back and forth between devices. To solve this, researchers have developed a technique called Local Training, which lets devices do some extra learning on their own before sending results to the main server. A new algorithm called Scaffnew has already shown promise in reducing communication costs in Federated Learning. This paper introduces FedComLoc, an even better way to make Federated Learning more efficient by combining Local Training with special compression methods. The results show that this approach can really help reduce the time and energy needed for devices to work together.
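
The sketch below is a minimal, hypothetical illustration of the general idea described above, not the paper's exact FedComLoc or Scaffnew procedure: each client runs several local training steps, compresses the update it would send to the server (here with an illustrative Top-K sparsifier), and the server averages the compressed updates. The function names, the least-squares objective, and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np

def top_k(vector, k):
    # Illustrative compressor: keep the k largest-magnitude entries, zero the rest.
    out = np.zeros_like(vector)
    idx = np.argsort(np.abs(vector))[-k:]
    out[idx] = vector[idx]
    return out

def local_training(model, X, y, steps=10, lr=0.1):
    # A few local gradient steps on a least-squares objective
    # (placeholder for each client's real training task).
    for _ in range(steps):
        grad = X.T @ (X @ model - y) / len(y)
        model = model - lr * grad
    return model

def compressed_local_training_round(server_model, clients, k):
    # One communication round: local training on each client,
    # Top-K compression of the resulting update, then server averaging.
    compressed_updates = []
    for X, y in clients:
        local_model = local_training(server_model.copy(), X, y)
        update = local_model - server_model           # what would be communicated
        compressed_updates.append(top_k(update, k))   # only k coordinates are sent
    return server_model + np.mean(compressed_updates, axis=0)

# Tiny synthetic example with two (heterogeneous) clients.
rng = np.random.default_rng(0)
d = 20
clients = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(2)]
model = np.zeros(d)
for _ in range(5):
    model = compressed_local_training_round(model, clients, k=5)
print("model norm after 5 rounds:", np.linalg.norm(model))
```

The actual FedComLoc integrates compression into the Scaffnew algorithm rather than into the plain averaging used above; this sketch only conveys why compressing what is communicated, combined with local training, reduces the cost of each round.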

Keywords

* Artificial intelligence
* Federated learning
* Quantization