

Harnessing Increased Client Participation with Cohort-Parallel Federated Learning

by Akash Dhasade, Anne-Marie Kermarrec, Tuan-Anh Nguyen, Rafael Pires, Martijn de Vos

First submitted to arXiv on: 24 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents a novel approach to Federated Learning (FL) called Cohort-Parallel Federated Learning (CPFL). FL is a collaborative machine learning process in which many nodes jointly update a global model. However, as more nodes participate, each individual update contributes less to the final model. To address this issue, CPFL divides the network into smaller cohorts that independently train their own models using FL until convergence; the resulting models are then unified into a single model using knowledge distillation. The study investigates the trade-off between the number of cohorts, model accuracy, training time, and compute resources, using realistic traces and non-IID data distributions on the CIFAR-10 and FEMNIST image classification tasks. Results show that CPFL with four cohorts reduces training time by 1.9x and resource usage by 1.3x while maintaining test accuracy.
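
To make the train-then-distill pipeline concrete, here is a minimal PyTorch sketch of the CPFL idea, not the authors' implementation: the toy model, the round-robin cohort assignment, the fixed round budget (standing in for the paper's convergence criterion), and the random transfer set used for distillation are all hypothetical choices made for this illustration.

    # Minimal CPFL sketch: cohorts train independently with FedAvg,
    # then their models are unified via knowledge distillation.
    # All names (make_model, fed_avg, distill, ...) are hypothetical.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_model():
        # Toy classifier; the paper uses real image models (CIFAR-10, FEMNIST).
        return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

    def local_update(model, data, targets, lr=0.01, steps=5):
        # One client's local SGD steps on its private data.
        model = copy.deepcopy(model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            F.cross_entropy(model(data), targets).backward()
            opt.step()
        return model.state_dict()

    def fed_avg(states):
        # FedAvg aggregation: parameter-wise mean of client state dicts.
        avg = copy.deepcopy(states[0])
        for k in avg:
            avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
        return avg

    def train_cohort(clients, rounds=20):
        # Vanilla FL inside one cohort; a fixed round budget stands in
        # for the paper's convergence test.
        model = make_model()
        for _ in range(rounds):
            states = [local_update(model, x, y) for x, y in clients]
            model.load_state_dict(fed_avg(states))
        return model

    def distill(teachers, transfer_x, epochs=10, T=2.0):
        # Unify cohort models into one student: the student matches the
        # teachers' averaged soft labels on a transfer set.
        student = make_model()
        opt = torch.optim.Adam(student.parameters(), lr=1e-3)
        with torch.no_grad():
            soft = torch.stack([F.softmax(t(transfer_x) / T, dim=1)
                                for t in teachers]).mean(0)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.kl_div(F.log_softmax(student(transfer_x) / T, dim=1),
                            soft, reduction="batchmean") * T * T
            loss.backward()
            opt.step()
        return student

    # Toy data: 16 clients with synthetic data (non-IID in the real setting),
    # split round-robin into 4 cohorts, the paper's best configuration.
    clients = [(torch.randn(64, 32), torch.randint(0, 10, (64,)))
               for _ in range(16)]
    cohorts = [clients[i::4] for i in range(4)]
    teachers = [train_cohort(c) for c in cohorts]           # parallelizable
    global_model = distill(teachers, torch.randn(256, 32))  # transfer set

Because the cohorts never exchange updates with each other, the train_cohort calls can run fully in parallel; only the final distillation step touches all cohort models, which is what enables the reported savings in wall-clock time.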
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning is a way for computers to work together to learn from each other's data. But as more computers join in, each one's contribution counts for less. To solve this problem, scientists came up with a new idea called CPFL. Instead of having all the computers work together at once, CPFL divides them into smaller groups that train separately and then combine their results. This makes training faster and more efficient. The researchers tested this approach using real-world data and found that it can cut training time by 1.9 times and resource usage by 1.3 times without sacrificing accuracy.

Keywords

» Artificial intelligence  » Federated learning  » Image classification  » Knowledge distillation  » Machine learning