


AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks

by Zheng Lin, Guanqiao Qu, Wei Wei, Xianhao Chen, Kin K. Leung

First submitted to arXiv on: 19 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
A novel federated learning approach called Split Federated Learning (SFL) has emerged as a promising solution for democratizing deep neural networks on resource-limited edge devices. SFL offloads the primary training workload to a server via model partitioning, enabling parallel training among edge devices. However, under resource constraints, system optimization significantly influences SFL performance. This paper provides a convergence analysis of SFL that quantifies the impact of model splitting (MS) and client-side model aggregation (MA) on learning performance. The proposed AdaptSFL framework adaptively controls MS and MA to balance communication-computing latency against training convergence. Extensive simulations validate that AdaptSFL reaches a target accuracy in less time than benchmarks, demonstrating its effectiveness.
Low Difficulty Summary (original content by GrooveSquid.com)
Split Federated Learning is a new way to make deep neural networks work on devices with limited resources. It splits the big training job in two and sends the heavier part to a server, so each device only handles a small piece and many devices can train at the same time. The hard part is making this setup work well when devices have different amounts of power. The researchers studied how choosing where to split the model, and how often to combine the devices' pieces, affects how well training works. They also built a new way to control these choices, called AdaptSFL, which helps make sure the devices don't waste time or energy.
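
The paper's code is not reproduced here. As a rough illustration of the two control knobs the summaries mention, the sketch below splits a toy model at a cut layer (model splitting, MS) and federated-averages the client-side submodels (client-side model aggregation, MA). It uses PyTorch, and every name in it (split_model, aggregate_client_models, cut_layer) is hypothetical, not taken from the paper.

```python
# Minimal sketch of split federated learning (SFL): not the authors' implementation.
# Assumptions (hypothetical, for illustration only): a toy feed-forward model,
# a cut layer chosen by hand, and plain FedAvg over the client-side submodels.
import copy
import torch
import torch.nn as nn

def split_model(layers, cut_layer):
    """Model splitting (MS): partition layers into client-side and server-side parts."""
    client_side = nn.Sequential(*layers[:cut_layer])
    server_side = nn.Sequential(*layers[cut_layer:])
    return client_side, server_side

def aggregate_client_models(client_models):
    """Client-side model aggregation (MA): element-wise FedAvg of parameters."""
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]
        ).mean(dim=0)
    for m in client_models:
        m.load_state_dict(avg_state)
    return client_models

# Toy example: split at layer 2, replicate the client side across 4 edge devices.
layers = [nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10)]
client_side, server_side = split_model(layers, cut_layer=2)
clients = [copy.deepcopy(client_side) for _ in range(4)]  # one submodel per device
clients = aggregate_client_models(clients)  # one MA round (training loop omitted)
```

In AdaptSFL, the cut-layer choice and the MA frequency are precisely the quantities the framework adapts to the available communication and computing resources; in this sketch they are simply fixed by hand.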

Keywords

  • Artificial intelligence
  • Federated learning
  • Optimization