


WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated Learning over Mobile Devices via Adaptive Subnetwork Scheduling

by Huai-an Su, Jiaxiang Geng, Liang Li, Xiaoqi Qin, Yanzhao Hou, Hao Wang, Xin Fu, Miao Pan

First submitted to arXiv on: 1 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Networking and Internet Architecture (cs.NI); Image and Video Processing (eess.IV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed federated learning approach, called Wireless and Heterogeneity Aware Latency Efficient FL (WHALE-FL), accelerates training by adapting each mobile device's subnetwork size assignment to its dynamic computing and communication conditions. This is in contrast to traditional fixed-size subnetwork assignment methods, which ignore these changes. The authors develop a novel utility function that captures device and federated learning dynamics, guiding each mobile device to select the optimal subnetwork size for its local training. This approach speeds up federated learning without sacrificing learning accuracy.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning over mobile devices is an exciting area of research! Currently, it's challenging to deploy this technology because different devices have very different computing and communication abilities. Some researchers have suggested letting each device train a smaller part of the global model locally, but this approach doesn't account for how a device's conditions change over time or for how far along the training is. To solve these issues, a new method called WHALE-FL was created. It uses a special formula to help each device choose the right size for its local training piece, based on its current abilities and the learning progress. This makes federated learning faster without losing accuracy.
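
The summaries above describe WHALE-FL's core mechanism: each device selects a subnetwork size by maximizing a utility function that reflects its current computing and communication conditions and the stage of training. The paper's actual utility function is not reproduced on this page, so the Python sketch below is only a minimal illustration of the idea; the latency cost model, the square-root accuracy-gain curve, and every constant in it are assumptions made for demonstration.

import math

# Illustrative sketch only: the cost model, gain curve, and all constants
# below are hypothetical, not the utility function from the WHALE-FL paper.

def estimated_round_latency(frac, device):
    """Rough per-round latency (seconds) for training a subnetwork that
    uses `frac` of the full model, under an assumed linear cost model."""
    full_model_flops = 5e9  # assumed FLOPs for one full local training pass
    compute_s = frac * full_model_flops / device["flops_per_sec"]
    upload_s = frac * device["model_mb"] * 8 / device["uplink_mbps"]
    return compute_s + upload_s

def utility(frac, device, progress, alpha=1.0, beta=2.0, lam=0.05):
    """Hypothetical utility: a concave accuracy gain that grows with
    training progress (in [0, 1]) minus a weighted latency penalty."""
    gain = (alpha + beta * progress) * math.sqrt(frac)
    return gain - lam * estimated_round_latency(frac, device)

def select_subnet_size(device, progress, candidates=(0.25, 0.5, 0.75, 1.0)):
    """Pick the candidate subnetwork fraction with the highest utility."""
    return max(candidates, key=lambda f: utility(f, device, progress))

# A well-provisioned device keeps the full model; a constrained one shrinks.
fast = {"flops_per_sec": 2e11, "model_mb": 40, "uplink_mbps": 50}
slow = {"flops_per_sec": 2e10, "model_mb": 40, "uplink_mbps": 5}
print(select_subnet_size(fast, progress=0.1))  # -> 1.0
print(select_subnet_size(slow, progress=0.1))  # -> 0.25

Under these assumed numbers, the well-connected device trains the full model while the bandwidth-limited one falls back to a quarter-size subnetwork; raising progress increases the gain term, which shifts the preferred fraction toward larger subnetworks as training matures.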

Keywords

» Artificial intelligence  » Federated learning