
Summary of CAFe: Cost and Age aware Federated Learning, by Sahan Liyanaarachchi et al.


CAFe: Cost and Age aware Federated Learning

by Sahan Liyanaarachchi, Kanchana Thilakarathna, Sennur Ulukus

First submitted to arXiv on: 24 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC); Information Theory (cs.IT)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning (FL) systems often employ a strategy in which the parameter server waits for at least M out of N clients to send their local gradients within a reporting deadline T. If fewer than M clients report in time, the round is declared failed and restarted from scratch, so the communication and computation already spent are wasted. Choosing M and T to minimize communication cost and resource wastage while maintaining the convergence rate is therefore crucial. In this paper, we show that the average client age at the parameter server appears explicitly in the theoretical convergence bounds, making it a useful metric for quantifying the convergence of the global model. We then provide an analytical scheme for selecting M and T in FL settings. (A toy simulation sketch of this waiting mechanism follows the summaries below.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning (FL) is a way to train models on many devices without sharing their data. In FL, clients need to send updates to the main server within a certain time window, and a round only counts if enough of them make it. If too few clients send updates on time, the round starts over, which wastes time and resources. This paper looks for the best choice of how many clients to wait for, and for how long, so that FL models converge quickly while wasting as few resources as possible.
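
To make the M-out-of-N deadline mechanism concrete, here is a minimal Python sketch that simulates FL rounds. It is not the paper's analytical scheme: the exponential client-delay model, the simple cost accounting, and the function names (simulate_round, evaluate) are assumptions made for illustration. Only the ingredients taken from the summaries above are kept fixed: the server waits for at least M of N clients within deadline T, discards failed rounds, and tracks each client's age (rounds since its update was last incorporated).

```python
import random


def simulate_round(num_clients, M, T, ages, mean_delay=1.0, rng=random):
    """One FL round: the server waits until deadline T and keeps the round only
    if at least M of the num_clients clients reported a local gradient in time.
    ages[i] counts rounds since client i's update was last incorporated into
    the global model. Client response delays are drawn i.i.d. exponential with
    the given mean, purely for illustration (assumption, not from the paper)."""
    delays = [rng.expovariate(1.0 / mean_delay) for _ in range(num_clients)]
    arrived = [d <= T for d in delays]
    uploads = sum(arrived)
    success = uploads >= M

    for i in range(num_clients):
        if success and arrived[i]:
            ages[i] = 0      # fresh update incorporated this round
        else:
            ages[i] += 1     # stale: round failed or client missed the deadline

    wasted = 0 if success else uploads  # uploads discarded when the round restarts
    return success, uploads, wasted


def evaluate(M, T, num_clients=10, rounds=2000, seed=0):
    """Crude figures of merit for a given (M, T): the time-averaged client age
    at the server, and the fraction of uploads wasted on failed rounds."""
    rng = random.Random(seed)
    ages = [0] * num_clients
    total_uploads = total_wasted = age_sum = 0.0
    for _ in range(rounds):
        _, uploads, wasted = simulate_round(num_clients, M, T, ages, rng=rng)
        total_uploads += uploads
        total_wasted += wasted
        age_sum += sum(ages) / num_clients
    waste_frac = total_wasted / total_uploads if total_uploads else 0.0
    return age_sum / rounds, waste_frac


if __name__ == "__main__":
    for M, T in [(3, 0.5), (5, 1.0), (8, 2.0)]:
        avg_age, waste_frac = evaluate(M, T)
        print(f"M={M}, T={T}: avg client age={avg_age:.2f}, "
              f"wasted upload fraction={waste_frac:.2f}")
```

Sweeping (M, T) pairs with evaluate hints at the trade-off the paper formalizes: demanding more clients per round (larger M) keeps client ages low when rounds succeed, but makes failed, wasted rounds more likely unless the deadline T is also extended.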

Keywords

» Artificial intelligence  » Federated learning