Towards Dynamic Resource Allocation and Client Scheduling in Hierarchical Federated Learning: A Two-Phase Deep Reinforcement Learning Approach

by Xiaojing Chen, Zhenyuan Li, Wei Ni, Xin Wang, Shunqing Zhang, Yanzan Sun, Shugong Xu, Qingqi Pei

First submitted to arXiv on: 21 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC); Optimization and Control (math.OC)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents a new framework for hierarchical federated learning (FL) systems powered by energy harvesting. The proposed two-phase deep deterministic policy gradient (DDPG) framework, called TP-DDPG, balances the trade-off between online learning delay and model accuracy by jointly optimizing the selection of participating clients, their CPU configurations, and their transmission powers. Experiments demonstrate a 39.4% reduction in training time relative to benchmark schemes when reaching a test accuracy of 0.9. A simplified sketch of the two-phase decision loop appears after the summaries.

Low Difficulty Summary (original content by GrooveSquid.com)
Federated learning lets machines learn together without sharing their data. This paper makes it work better on devices that run on batteries and have limited energy. The team created a new “two-phase” approach to balance speed and accuracy, and testing showed it can train models 39.4% faster than benchmark methods.

Keywords

  • Artificial intelligence
  • Federated learning
  • Online learning