


Gradient Coding in Decentralized Learning for Evading Stragglers

by Chengxi Li, Mikael Skoglund

First submitted to arXiv on: 6 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Signal Processing (eess.SP)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper’s original abstract, which can be read on the paper’s arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes GOCO (a gossip-based decentralized learning method with gradient coding) to address the problem of stragglers, i.e., slow or unresponsive nodes, in decentralized learning. Traditional gradient coding techniques were designed for distributed learning with a central server and are therefore not directly applicable to decentralized settings, so the authors develop a new approach that combines gossip-based averaging with the framework of stochastic gradient coding: each node updates its parameter vector locally using encoded gradients and then averages it with its neighbors’ vectors in a gossip-based manner. The paper analyzes the convergence of GOCO for strongly convex loss functions and presents simulation results demonstrating its superiority over baseline methods.
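To make the two-phase structure concrete, here is a minimal, illustrative Python sketch of one GOCO-style iteration: a local update using a straggler-tolerant encoded gradient, followed by gossip averaging with a mixing matrix W. All names (goco_step, encoded_gradient, grad_fn, coeffs) are illustrative choices of ours, not the authors’ implementation, and the encoding shown is a generic placeholder rather than the paper’s exact scheme.

```python
import numpy as np

def encoded_gradient(grad_fn, x, assigned_parts, finished, coeffs):
    """Combine the gradients of the data parts that finished in time.

    grad_fn(x, part) returns a stochastic gradient of data part `part`
    at x. `coeffs` holds decoding coefficients chosen by the gradient
    code so that the combination still approximates the full gradient
    even when some parts are missing (i.e., straggled).
    """
    g = np.zeros_like(x)
    for part in assigned_parts:
        if finished[part]:
            g += coeffs[part] * grad_fn(x, part)
    return g

def goco_step(X, W, grad_fn, parts, finished, coeffs, lr):
    """One illustrative GOCO-style iteration over all n nodes.

    X is an (n, d) array whose i-th row is node i's parameter vector;
    W is an (n, n) doubly stochastic gossip matrix with W[i, j] > 0
    only if nodes i and j are neighbors in the network graph.
    """
    n = X.shape[0]
    # Phase 1: each node takes a local step with its encoded gradient.
    X_half = np.stack([
        X[i] - lr * encoded_gradient(grad_fn, X[i], parts[i], finished, coeffs)
        for i in range(n)
    ])
    # Phase 2: gossip averaging, x_i <- sum_j W[i, j] * x_j.
    return W @ X_half
```

This sketch only captures the update pattern described in the summary (a local coded-gradient step, then gossip averaging); the paper’s actual encoding/decoding scheme and the step sizes used in the strongly convex analysis are not reproduced here.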
Low Difficulty Summary (written by GrooveSquid.com, original content)

This paper is about a new way for many devices to learn together without a central computer, even when some devices do not finish their work on time. Such slow devices are called “stragglers”, and decentralized learning currently lacks a good way to handle them. The authors propose a method called GOCO that combines two ideas: gossip-based averaging and gradient coding. This lets the devices keep learning efficiently even when stragglers are present, and the authors show that GOCO works well and outperforms other methods.

Keywords

  • Artificial intelligence