
Summary of Adaptive Federated Learning in Heterogeneous Wireless Networks with Independent Sampling, by Jiaxiang Geng et al.


Adaptive Federated Learning in Heterogeneous Wireless Networks with Independent Sampling

by Jiaxiang Geng, Yanzhao Hou, Xiaofeng Tao, Juncheng Wang, Bing Luo

First submitted to arXiv on: 15 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Networking and Internet Architecture (cs.NI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes an adaptive Federated Learning (FL) algorithm that addresses the limitations of existing client sampling methods in heterogeneous wireless networks. The authors adopt an independent client sampling strategy to minimize wall-clock training time while accounting for both data and system heterogeneity. They derive a new convergence bound for non-convex loss functions under independent client sampling and propose an adaptive bandwidth allocation scheme for the sampled clients. They also present an efficient algorithm, built on the derived upper bounds on the number of convergence rounds and the expected per-round training time, for choosing the sampling setup. Experimental results demonstrate that the proposed approach outperforms current best practices across various training models and datasets.
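The core mechanism in this summary, each client joining a round independently with its own probability, can be sketched in a few lines. Below is a minimal illustration in Python, assuming Bernoulli participation probabilities `q[i]` and inverse-probability scaling of each update; the names `q`, `weights`, and `sample_round`, and the uniformly drawn probabilities, are illustrative placeholders and not taken from the paper, whose actual probability optimization and bandwidth allocation are not reproduced here.

```python
# Minimal sketch of independent (Bernoulli) client sampling for FL aggregation.
# Assumption (not from the paper's code): client i participates in a round
# independently with probability q[i], and its update is scaled by 1/q[i]
# so the aggregated update remains unbiased.
import numpy as np

rng = np.random.default_rng(0)

n_clients, dim = 10, 5
q = rng.uniform(0.2, 0.9, size=n_clients)       # per-client sampling probabilities (placeholder values)
weights = np.full(n_clients, 1.0 / n_clients)   # aggregation weights, e.g., data fractions

def sample_round(updates: np.ndarray) -> np.ndarray:
    """Aggregate one FL round under independent client sampling."""
    participate = rng.random(n_clients) < q     # each client decides independently
    agg = np.zeros(dim)
    for i in np.flatnonzero(participate):
        # Inverse-probability scaling keeps E[agg] equal to the
        # full-participation weighted average.
        agg += weights[i] * updates[i] / q[i]
    return agg

# Demo: averaged over many rounds, the estimate approaches the true mean update.
updates = rng.normal(size=(n_clients, dim))
est = np.mean([sample_round(updates) for _ in range(20000)], axis=0)
print(np.allclose(est, weights @ updates, atol=0.02))  # True
```

The 1/q[i] scaling is what makes this scheme attractive to analyze: the aggregate is an unbiased estimate of the full-participation average no matter how the probabilities are set, which is what allows the probabilities to be tuned for wall-clock training time without biasing convergence.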
Low Difficulty Summary (original content by GrooveSquid.com)
This paper makes it easier for devices to work together to learn from data without sharing their own information. Current methods pick a few devices at random each round, but this can be slow or unfair when the devices differ widely in speed and data. The authors suggest a new way of choosing devices, where each device decides independently whether to join a round, that works well even when the devices are very different and hold different amounts of data. They also propose a way to divide the available wireless bandwidth among the chosen devices so that slower ones do not hold up each round. Together, these ideas improve communication efficiency and reduce the time it takes for all devices to learn together.

Keywords

  • Artificial intelligence
  • Federated learning