Summary of Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks, by Yudi Huang et al.


Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks

by Yudi Huang, Tingyang Sun, Ting He

First submitted to arXiv on: 8 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Networking and Internet Architecture (cs.NI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, which can be read on the paper's arXiv abstract page.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a decentralized federated learning (DFL) framework in which distributed agents learn directly from one another without centralized coordination. Most existing designs assume that communicating agents are physically adjacent; this work instead targets overlay-based DFL, where agents exchange models over logical links that share the limited bandwidth of an underlying network. The authors leverage network tomography, which infers internal network characteristics from end-to-end measurements, to jointly design the communication demands and the communication schedule. By decomposing this joint design into tractable optimization subproblems, the proposed solution minimizes the total training time, accelerating DFL compared to state-of-the-art designs. The approach is evaluated through extensive simulations using real-world datasets (a minimal illustrative sketch of the underlying training loop follows these summaries).

Low Difficulty Summary (original content by GrooveSquid.com)
A new way of training artificial intelligence (AI) lets devices learn from each other without needing a central controller, which helps AI reach places with limited internet bandwidth. Most current methods assume that the devices sharing updates are physically close together, but this is often not the case. The researchers address this by using tools that can map the structure of the underlying network, break the problem into smaller, easier-to-solve parts, and show that the resulting method works well in practice.
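
To make the medium difficulty summary more concrete, below is a minimal, self-contained sketch of the kind of decentralized training loop that overlay-based DFL runs: each agent takes a local gradient step on its private data and then averages its model with its overlay neighbours through a doubly stochastic mixing matrix. This is a generic decentralized-SGD illustration, not the authors' algorithm; the ring overlay, the Metropolis-Hastings mixing weights, the toy least-squares objective, and names such as mixing_matrix and local_step are assumptions made for illustration, and the bandwidth-aware joint design of communication demands and schedules, which is the paper's actual contribution, is not modelled here.

# Minimal sketch of decentralized federated learning (decentralized SGD) over
# an overlay graph. Illustrative only -- not the paper's algorithm; all names
# and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 5
DIM = 10
# Overlay topology: which agents exchange models. In the paper, these logical
# links share the bandwidth of underlying (underlay) network paths.
OVERLAY_EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # a ring

def mixing_matrix(n, edges):
    """Doubly stochastic mixing matrix supported on the overlay graph
    (Metropolis-Hastings weights)."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = np.zeros((n, n))
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, j] = W[j, i] = w
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

# Toy private data per agent: a local least-squares objective.
A = [rng.normal(size=(20, DIM)) for _ in range(N_AGENTS)]
b = [a @ rng.normal(size=DIM) + 0.1 * rng.normal(size=20) for a in A]

def local_step(x, i, lr=0.01):
    """One local gradient step on agent i's private objective."""
    grad = A[i].T @ (A[i] @ x - b[i]) / len(b[i])
    return x - lr * grad

W = mixing_matrix(N_AGENTS, OVERLAY_EDGES)
models = [rng.normal(size=DIM) for _ in range(N_AGENTS)]

for rnd in range(200):
    # 1) Local computation on each agent's private data.
    models = [local_step(models[i], i) for i in range(N_AGENTS)]
    # 2) Communication round: every agent averages with its overlay
    #    neighbours. Each nonzero W[i, j] is a model transfer whose cost
    #    depends on the bandwidth of the shared underlay links -- the
    #    quantity the paper's joint demand/schedule design optimizes.
    stacked = np.stack(models)          # shape (N_AGENTS, DIM)
    models = list(W @ stacked)

consensus_gap = max(np.linalg.norm(m - np.mean(models, axis=0)) for m in models)
print(f"consensus gap after training: {consensus_gap:.4f}")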

Keywords

» Artificial intelligence  » Federated learning  » Optimization