Mitigating System Bias in Resource Constrained Asynchronous Federated Learning Systems

by Jikun Gao, Ioannis Mavromatis, Peizheng Li, Pietro Carnelli, Aftab Khan

First submitted to arXiv on: 24 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, researchers propose a new approach to asynchronous federated learning (AFL) that tackles performance challenges arising from heterogeneous devices and non-identically distributed (non-IID) data across clients. The proposed dynamic global model aggregation method scores client model updates based on their upload frequency and adjusts their weighting accordingly, accommodating differences in device capabilities. In addition, an updated global model is provided to clients immediately after they upload their local models, reducing idle time and improving training efficiency. The approach is evaluated in a simulated AFL deployment with 10 clients that have heterogeneous compute constraints and non-IID data, using the FashionMNIST dataset. Results show over 10% and 19% improvements in global model accuracy compared to the state-of-the-art methods PAPAYA and FedAsync, respectively.
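
The aggregation rule described above lends itself to a short sketch. The Python snippet below is an illustrative sketch, not the authors' released code: a server keeps a per-client upload counter, mixes each incoming update into the global model with a weight that shrinks as that client's upload count grows, and immediately hands the refreshed global state back to the uploader. The class name, the base_mixing_rate parameter, and the 1/upload-count scoring rule are hypothetical choices made for illustration.

import copy
from collections import defaultdict


class FrequencyWeightedServer:
    """Sketch of an asynchronous FL server that down-weights frequent uploaders."""

    def __init__(self, initial_state: dict, base_mixing_rate: float = 0.5):
        self.global_state = copy.deepcopy(initial_state)  # param name -> tensor/array
        self.base_mixing_rate = base_mixing_rate          # hypothetical tuning knob
        self.upload_counts = defaultdict(int)             # client_id -> uploads seen

    def receive_update(self, client_id: str, client_state: dict) -> dict:
        """Fold one client update into the global model and return the new global state."""
        self.upload_counts[client_id] += 1
        # The score shrinks with upload frequency, so fast, well-resourced clients
        # do not dominate the aggregated model (illustrative 1/count rule).
        alpha = self.base_mixing_rate / self.upload_counts[client_id]
        for name, g in self.global_state.items():
            self.global_state[name] = (1 - alpha) * g + alpha * client_state[name]
        # Return the refreshed model immediately so the client can resume training
        # without waiting for a synchronization round.
        return copy.deepcopy(self.global_state)

In this sketch, a client would call receive_update right after finishing local training and continue from the returned state, which mirrors the reduced idle time described above.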
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning helps many devices learn together without sharing their own data. But this can be hard when devices have different capabilities and their data is not the same. Researchers found a way to make it work better by scoring how often each device sends its updates and adjusting how much each update counts. They also made sure every device gets an updated copy of the shared model right away, so no device wastes time waiting. The new method worked well in simulated tests, getting 10% and 19% better results than other methods.

Keywords

  * Artificial intelligence
  * Federated learning