Summary of On the Convergence of Decentralized Federated Learning Under Imperfect Information Sharing, by Vishnu Pandi Chellapandi et al.


On the Convergence of Decentralized Federated Learning Under Imperfect Information Sharing

by Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, Stanislaw H. Żak

First submitted to arXiv on: 19 Mar 2023

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Systems and Control (eess.SY)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary — written by the paper authors
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary — written by GrooveSquid.com (original content)
This research paper presents three algorithms, FedNDL1, FedNDL2, and FedNDL3, for Decentralized Federated Learning (DFL) in scenarios where communication between agents may be imperfect. The algorithms model noisy communication channels by adding noise to the shared parameters and performing gossip averaging alongside local gradient optimization. These methods apply to a range of decentralized learning and optimization problems, including federated learning, and are closely tied to the consensus problem, a central problem in control.
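To make the noisy-gossip idea concrete, here is a minimal sketch of one such update: each agent adds channel noise to its parameters, averages with its neighbors via a mixing matrix, then takes a local gradient step. The ring topology, mixing matrix `W`, noise level, and quadratic per-agent loss are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3
targets = rng.normal(size=(n_agents, dim))   # agent i minimizes ||x_i - targets[i]||^2
x = rng.normal(size=(n_agents, dim))         # local parameters, one row per agent

# Doubly stochastic mixing matrix for a 4-agent ring (illustrative choice)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

lr, noise_std = 0.1, 0.01
for _ in range(200):
    noisy = x + rng.normal(scale=noise_std, size=x.shape)  # imperfect channel
    x = W @ noisy                                          # gossip averaging
    x -= lr * 2.0 * (x - targets)                          # local gradient step

# Despite the noisy sharing, the agents' average tracks the consensus
# optimum (the mean of the targets) up to a small error.
consensus = targets.mean(axis=0)
print(np.abs(x.mean(axis=0) - consensus).max())
```

With a constant step size and nonzero channel noise, the iterates settle into a neighborhood of the consensus optimum rather than converging exactly; shrinking the step size or the noise level shrinks that neighborhood, which is the kind of trade-off the paper's convergence analysis quantifies.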
Low Difficulty Summary — written by GrooveSquid.com (original content)
This paper shows how to do learning with lots of devices that share information. When these devices talk to each other, they might get some messages wrong. The researchers found three ways to make sure the devices can still learn from each other despite this noise. These methods help solve a big problem in control theory and can be used for things like letting many devices work together on one task.

Keywords

* Artificial intelligence  * Federated learning  * Optimization