Split Federated Learning Over Heterogeneous Edge Devices: Algorithm and Optimization

by Yunrui Sun, Gang Hu, Yinglei Teng, Dunbo Cai

First submitted to arXiv on: 21 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Neural and Evolutionary Computing (cs.NE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors): the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content):
The proposed Heterogeneous Split Federated Learning (HSFL) framework lets resource-constrained edge devices train personalized models in parallel, each splitting the model at its own cut layer. By jointly optimizing computational and transmission resources, it addresses the low training efficiency and long latency of sequential split-learning schemes. HSFL outperforms other frameworks in convergence rate and model accuracy on heterogeneous devices with non-IID data.
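To make the split training step concrete, here is a minimal PyTorch-style sketch, which is an assumption on our part since the paper does not prescribe a framework or architecture. Each device runs the model up to its own cut layer, uploads the resulting activations ("smashed data") to the server, and receives the activation gradient back to finish local backpropagation. The four-block model, the cut-layer values, and names like split_step are illustrative only; the paper's federated aggregation and joint compute/transmission resource optimization are not shown here.

```python
import torch
import torch.nn as nn

def make_layers():
    # Hypothetical 4-block model; block boundaries are candidate cut layers.
    return nn.ModuleList([
        nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
        nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
        nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
        nn.Linear(64, 10),
    ])

def split_step(client, server, c_opt, s_opt, x, y):
    """One split training step for a single device."""
    loss_fn = nn.CrossEntropyLoss()

    # Device-side forward pass up to the cut layer ("smashed data").
    smashed = client(x)
    # Simulate uplink transmission: the server's autograd graph starts at
    # these activations, so detach and re-enable gradients.
    server_in = smashed.detach().requires_grad_(True)

    # Server-side forward pass, loss, and backward pass.
    loss = loss_fn(server(server_in), y)
    s_opt.zero_grad()
    loss.backward()
    s_opt.step()

    # Simulate downlink: send the activation gradient back so the device
    # can finish backpropagation through its own layers.
    c_opt.zero_grad()
    smashed.backward(server_in.grad)
    c_opt.step()
    return loss.item()

# Heterogeneous devices choose different cut layers (illustrative values):
# a weaker device keeps fewer layers on-device.
cuts = {"weak_device": 1, "strong_device": 3}
for name, cut in cuts.items():
    layers = make_layers()
    client = nn.Sequential(*list(layers)[:cut])   # runs on the edge device
    server = nn.Sequential(*list(layers)[cut:])   # runs on the server
    c_opt = torch.optim.SGD(client.parameters(), lr=0.01)
    s_opt = torch.optim.SGD(server.parameters(), lr=0.01)
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    print(name, "loss:", split_step(client, server, c_opt, s_opt, x, y))
```

The device-specific cut is the key degree of freedom: a device with less compute keeps fewer layers locally and offloads more of the forward and backward pass to the server, which is what lets heterogeneous devices train in parallel rather than sequentially.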
Low Difficulty Summary (written by GrooveSquid.com, original content):
Split Learning (SL) is a way to help computers train models together without sharing all of their information. But current SL methods can take a long time and use too much energy. To fix this, researchers created the Heterogeneous Split Federated Learning (HSFL) framework. It lets devices train their own models at the same time, each using a different part of the model, which makes training faster and more efficient.

Keywords

  • Artificial intelligence
  • Federated learning