Training Heterogeneous Client Models using Knowledge Distillation in Serverless Federated Learning
by Mohak Chadha, Pulkit Khera, Jianfeng Gu, Osama Abboud, Michael Gerndt
First submitted to arXiv on: 11 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on the paper's arXiv page. |
Medium | GrooveSquid.com (original content) | This paper tackles the limitations of Federated Learning (FL) by introducing novel optimized serverless workflows for two popular conventional federated Knowledge Distillation (KD) techniques, FedMD and FedDF. These workflows are designed to address resource and statistical data heterogeneity among FL clients. The proposed methods utilize an open-source serverless FL system called FedLess and demonstrate improvements in accuracy, training time, and cost over existing approaches. Specifically, the authors show that serverless FedDF is more robust to extreme non-IID data distributions, is faster, and leads to lower costs than serverless FedMD (see the sketch after this table). |
Low | GrooveSquid.com (original content) | Federated Learning is a way for many devices or computers to work together using artificial intelligence. It’s like a team project where everyone contributes their own piece of information without sharing the whole thing. The problem is that not all teams have the same resources or knowledge, so it can be hard to get everyone working together. This paper helps solve this problem by creating new ways for devices to share their information and work together more efficiently. |
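Both KD techniques named above aggregate client knowledge through predictions (logits) on a public dataset rather than through model weights, which is what lets clients run heterogeneous model architectures. Below is a minimal, generic PyTorch sketch of the two aggregation steps; it is not the paper's FedLess serverless workflow, and all names (`client_models`, `public_loader`, `server_model`) are hypothetical placeholders.

```python
# Hedged sketch of FedMD- and FedDF-style aggregation (not the paper's code).
import torch
import torch.nn.functional as F

def fedmd_consensus(client_models, public_x):
    """FedMD-style step: each client scores a public batch; the consensus
    is the element-wise mean of those logits, later used as a training
    target by every client. Models may have different architectures."""
    with torch.no_grad():
        logits = [m(public_x) for m in client_models]
    return torch.stack(logits).mean(dim=0)  # consensus targets

def feddf_distill(server_model, client_models, public_loader, epochs=1, lr=1e-3):
    """FedDF-style step: distil the averaged client (teacher) logits into a
    server (student) model using unlabelled public data; the loader here is
    assumed to yield input batches only."""
    opt = torch.optim.Adam(server_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in public_loader:
            with torch.no_grad():
                teacher = torch.stack([m(x) for m in client_models]).mean(dim=0)
            # KL divergence between student and ensemble-teacher distributions
            loss = F.kl_div(F.log_softmax(server_model(x), dim=1),
                            F.softmax(teacher, dim=1),
                            reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return server_model
```

The key design difference: in FedMD the consensus logits are sent back to the clients, each of which then trains toward them locally, whereas in FedDF the distillation happens once per round on the server side.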
Keywords
* Artificial intelligence
* Federated learning
* Knowledge distillation