
Summary of Fairness-aware Job Scheduling For Multi-job Federated Learning, by Yuxin Shi et al.


Fairness-Aware Job Scheduling for Multi-Job Federated Learning

by Yuxin Shi, Han Yu

First submitted to arXiv on: 5 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The paper's original abstract serves as the high difficulty summary.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Federated learning (FL) enables data owners to train machine learning models collaboratively without sharing sensitive data. Existing FL research focuses on a single FL server selecting clients for each training round. However, in practice, multiple FL servers may simultaneously try to select clients from the same pool. To address this gap, we propose FairFedJS, a fairness-aware federated job scheduling approach based on Lyapunov optimization. This approach ensures fair allocation of high-demand client datasets by considering current demand and job payment bids. Our experiments on two datasets demonstrate its significant advantages, outperforming state-of-the-art approaches in terms of scheduling fairness and convergence time while achieving comparable test accuracy.
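The core idea described above, balancing how long a job has been starved of clients (fairness) against the payment it bids, resembles a Lyapunov drift-plus-penalty rule. The sketch below illustrates that general scheduling pattern with virtual queues; the function name, queue update, and scoring rule are simplifying assumptions for illustration, not the paper's actual FairFedJS algorithm.

```python
# Illustrative drift-plus-penalty scheduler (assumed structure, not FairFedJS itself).
# Each job keeps a virtual queue of unmet demand; longer queues mean the job has
# been starved and should be prioritized, while V weights payment bids.

def schedule_round(queues, demands, bids, clients, V=1.0):
    """Assign each available client to one FL job for this round.

    queues  : dict job -> virtual queue length (accumulated unmet demand)
    demands : dict job -> number of clients the job requests this round
    bids    : dict job -> payment bid the job offers this round
    clients : list of available client ids
    V       : trade-off between fairness (queue drift) and payment (penalty)
    """
    allocation = {job: [] for job in queues}
    for client in clients:
        # Greedy drift-plus-penalty choice: prefer starved jobs (long queues),
        # tilted toward higher-paying jobs by the weight V.
        job = max(queues, key=lambda j: queues[j] + V * bids[j])
        allocation[job].append(client)
        # Serving one client drains the chosen job's backlog,
        # so other jobs win subsequent clients.
        queues[job] = max(queues[job] - 1, 0)
    # New demand arrives after service: Q(t+1) = max(Q(t) - served, 0) + arrival.
    for job in queues:
        queues[job] += demands[job]
    return allocation
```

With equal bids, a job with a long backlog captures the available clients; with empty queues, higher bids dominate. In this toy form the same trade-off the summary describes, demand versus payment, is visible in the single `queues[j] + V * bids[j]` score.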
Low Difficulty Summary (written by GrooveSquid.com; original content)
Imagine a world where many people work together to train computer models without sharing their private data. This is called federated learning (FL). Currently, FL research focuses on one “server” choosing which participants join each training round. But what if multiple servers want to pick from the same pool of participants at the same time? To solve this problem, the authors created a new way of scheduling these assignments called FairFedJS. It makes sure that high-demand data is fairly shared among all the servers, so no job has to wait too long. They tested it on two big datasets and found that it outperformed other methods in terms of fairness and speed while still keeping test accuracy high.

Keywords

  • Artificial intelligence
  • Federated learning
  • Machine learning
  • Optimization