


Hierarchical Split Federated Learning: Convergence Analysis and System Optimization

by Zheng Lin, Wei Wei, Zhe Chen, Chan-Tong Lam, Xianhao Chen, Yue Gao, Jun Luo

First submitted to arXiv on: 10 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Networking and Internet Architecture (cs.NI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

This paper tackles the challenge of deploying federated learning (FL) on resource-constrained edge devices as AI models grow in size. It proposes a hierarchical split federated learning (HSFL) framework that reduces the workload on edge devices via model splitting. The authors derive a convergence bound for HSFL and formulate a joint optimization problem over model splitting (MS) and model aggregation (MA), which they decompose into MS and MA subproblems and solve with an iterative descending algorithm. Simulation results demonstrate the effectiveness of the tailored algorithm in optimizing MS and MA for SFL over multi-tier systems.
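
The paper's actual algorithm is not reproduced in this summary; the sketch below only illustrates the general shape of an alternating (block-coordinate) descent over two decision blocks, as described above. The cost model — a hypothetical trade-off between per-round latency and a convergence-bound-driven round count — and all numeric constants are invented for illustration.

```python
# Illustrative sketch only: a generic alternating descent over two decision
# blocks (split point and aggregation interval), NOT the paper's algorithm.

def cost(split_layer, agg_interval):
    # Hypothetical per-round cost: a deeper split puts more layers on the
    # device; a longer aggregation interval cuts communication but (as a
    # convergence bound would suggest) requires more rounds.
    device_compute = 2.0 * split_layer
    server_compute = 1.0 * (10 - split_layer)
    comm = 5.0 / agg_interval
    rounds_needed = 100 * (1 + 0.1 * agg_interval)
    return (device_compute + server_compute + comm) * rounds_needed

def alternating_descent(splits=range(1, 10), intervals=range(1, 6), iters=10):
    s, a = 1, 1
    for _ in range(iters):
        s = min(splits, key=lambda x: cost(x, a))     # MS subproblem
        a = min(intervals, key=lambda x: cost(s, x))  # MA subproblem
    return s, a

best_split, best_interval = alternating_descent()
```

Each pass fixes one block and minimizes the other, so the total cost never increases; the loop settles at a coordinate-wise minimum.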

Low Difficulty Summary (original content by GrooveSquid.com)

Federated learning lets devices train a shared AI model without sharing their raw data. As AI models get bigger, it becomes hard to use this technology on small devices like smart-home gadgets or smartphones. Researchers came up with an idea called split federated learning (SFL) that makes it easier for these devices to participate in AI training. Most studies of SFL have only looked at two-tier systems. This paper looks at how to make SFL work well in more complex cloud-edge systems, through a new approach the authors call hierarchical SFL. They also work out how to optimize this approach and show through computer simulations that it works.
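
The core splitting idea in the summary can be sketched in a few lines: the device runs the first part of the model and sends only an intermediate activation to the server, which runs the rest. This toy NumPy example uses arbitrary layer sizes and random weights; it is not the paper's architecture.

```python
import numpy as np

# Toy illustration of model splitting: raw data stays on the device, and
# only an intermediate activation crosses the network to the server.
rng = np.random.default_rng(0)
W_device = rng.normal(size=(8, 4))  # device-side layer (first part of model)
W_server = rng.normal(size=(4, 2))  # server-side layer (rest of model)

def device_forward(x):
    # Device computes its layers and sends this activation to the server.
    return np.maximum(x @ W_device, 0.0)

def server_forward(h):
    # Server completes the forward pass from the received activation.
    return h @ W_server

x = rng.normal(size=(1, 8))  # raw data; never leaves the device
output = server_forward(device_forward(x))
```

In training, gradients would flow back across the same cut point, so the device only ever exchanges activations and gradients, never the data itself.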

Keywords

» Artificial intelligence  » Federated learning  » Optimization