Communication-Efficient Federated Learning through Adaptive Weight Clustering and Server-Side Distillation
by Vasileios Tsouvalas, Aaqib Saeed, Tanir Ozcelebi, Nirvana Meratnia
First submitted to arXiv on: 25 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract (available on arXiv). |
Medium | GrooveSquid.com (original content) | FedCompress, the proposed Federated Learning (FL) technique, addresses excessive communication costs during model training by combining dynamic weight clustering with server-side knowledge distillation, reducing the data exchanged between clients and the server while still learning highly generalizable models. Compared to baselines, FedCompress shows gains in both communication cost and inference speed on diverse public datasets (a minimal illustrative sketch of the weight-clustering idea follows this table). |
Low | GrooveSquid.com (original content) | FedCompress is a new way to train deep neural networks across many devices without sharing their data. It’s like sending a summary instead of the whole book. The team showed that this method is fast and efficient, which could help make artificial intelligence more private and reliable. |
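
The summaries describe FedCompress only at a high level, so here is a minimal sketch of the core weight-clustering idea: quantizing a client's weights to a small codebook so that only centroid values and per-weight indices need to be uploaded. This is not the authors' implementation; the function names, the fixed cluster count, and the use of scikit-learn's k-means are illustrative assumptions, and the paper's adaptive choice of cluster count and its server-side distillation step are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumption: k-means used for illustration only


def cluster_weights(weights: np.ndarray, n_clusters: int = 16):
    """Quantize a weight tensor to a small codebook via k-means (hypothetical helper).

    Returns the codebook (n_clusters float values) and, for every weight, the
    index of its assigned centroid. Sending the codebook plus the integer
    indices is far cheaper than sending full-precision floats.
    """
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    codebook = km.cluster_centers_.ravel()        # k float values
    assignments = km.labels_.astype(np.uint8)     # one small index per weight
    return codebook, assignments


def reconstruct_weights(codebook: np.ndarray, assignments: np.ndarray, shape):
    """Server side: rebuild the (lossy) weight tensor from codebook + indices."""
    return codebook[assignments].reshape(shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 64)).astype(np.float32)  # a toy weight matrix

    codebook, idx = cluster_weights(w, n_clusters=16)
    w_hat = reconstruct_weights(codebook, idx, w.shape)

    # Compare 32-bit floats against uint8 indices plus the small codebook.
    original_bytes = w.size * 4
    compressed_bytes = idx.size * 1 + codebook.size * 4
    print(f"compression ratio ~{original_bytes / compressed_bytes:.1f}x, "
          f"mean abs error {np.abs(w - w_hat).mean():.4f}")
```

Even this naive version cuts the upload size roughly fourfold (uint8 indices instead of float32 weights) at the cost of a small reconstruction error, which is the basic trade-off the paper's adaptive clustering and server-side distillation are designed to manage.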
Keywords
* Artificial intelligence * Clustering * Federated learning * Inference * Knowledge distillation