Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting

by Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, Bo Han

First submitted to arXiv on: 23 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, which can be read on the paper's arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
One-shot Federated Learning (OFL) is a promising approach that trains a global server model in a single communication round. The server model aggregates knowledge from all client models through distillation, with synthesized samples serving as the distillation data. Recent works have shown that the server model's performance hinges on the quality of both the synthesized data and the ensemble model. To advance OFL, we propose Co-Boosting, a novel framework in which the synthesized data and the ensemble model enhance each other in turn: Co-Boosting uses the current ensemble model to generate hard, high-quality samples in an adversarial manner, then uses these hard samples to refine the ensemble by adjusting the ensembling weight of each client model. Experiments demonstrate that Co-Boosting consistently outperforms existing baselines across various settings, and it requires no changes to local client training, transmits no additional data or models, and allows heterogeneous client model architectures.
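
Since the summary above describes an iterative algorithm, the following is a minimal, self-contained PyTorch sketch of one plausible co-boosting loop. Everything in it, including the toy model shapes, the entropy-based weight update in step (2), and all hyperparameters, is an illustrative assumption rather than the authors' reference implementation; it only shows how the three ingredients (adversarial data synthesis, ensemble re-weighting, and distillation) can alternate.

```python
# Illustrative sketch only: hyperparameters, architectures, and the
# step-(2) objective are assumptions, not the paper's actual recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM, NUM_CLASSES, NUM_CLIENTS, LATENT = 16, 4, 3, 8

# Frozen stand-ins for the pre-trained (possibly heterogeneous) client models.
client_models = [
    nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, NUM_CLASSES))
    for _ in range(NUM_CLIENTS)
]
for m in client_models:
    m.requires_grad_(False)

generator = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, DIM))
server = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, NUM_CLASSES))
ens_weights = torch.zeros(NUM_CLIENTS, requires_grad=True)  # one weight per client

gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
w_opt = torch.optim.Adam([ens_weights], lr=1e-2)
srv_opt = torch.optim.Adam(server.parameters(), lr=1e-3)

def ensemble_logits(x):
    """Client logits averaged under the current (softmax-normalized) weights."""
    w = F.softmax(ens_weights, dim=0)
    return sum(wi * m(x) for wi, m in zip(w, client_models))

for rnd in range(50):  # number of co-boosting rounds: an arbitrary choice here
    z = torch.randn(64, LATENT)

    # (1) Data boosting: steer the generator toward *hard* samples, i.e.
    # ones where the student (server) disagrees with the current ensemble.
    gen_opt.zero_grad()
    x = generator(z)
    disagreement = F.kl_div(
        F.log_softmax(server(x), dim=1),
        F.softmax(ensemble_logits(x), dim=1),
        reduction="batchmean",
    )
    (-disagreement).backward()  # gradient ascent on disagreement
    gen_opt.step()

    x = generator(z).detach()  # the hard samples for this round

    # (2) Ensemble boosting: re-weight the client models on the hard samples.
    # Stand-in objective: sharpen (minimize the entropy of) the weighted
    # ensemble's predictions; the paper's actual update rule may differ.
    w_opt.zero_grad()
    p = F.softmax(ensemble_logits(x), dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    entropy.backward()
    w_opt.step()

    # (3) Distillation: fit the server model to the re-weighted ensemble.
    with torch.no_grad():
        teacher = F.softmax(ensemble_logits(x), dim=1)
    srv_opt.zero_grad()
    distill = F.kl_div(F.log_softmax(server(x), dim=1), teacher,
                       reduction="batchmean")
    distill.backward()
    srv_opt.step()
```

The design choice this sketch tries to mirror is that steps (1) and (2) feed each other: harder samples reveal which client models deserve more weight, and the re-weighted ensemble in turn defines what "hard" means in the next round.
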
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine a way to train a single global model without collecting extra data or repeatedly exchanging models. One-shot Federated Learning (OFL) does just that! It's like distilling knowledge from all the individual models on each device into one global model. But for OFL to work well, you need high-quality "training samples" and a good mix of all the individual models' strengths. To make this happen, we created Co-Boosting, which improves both the quality of these training samples and the combined strength of all the individual models at the same time. This approach outperforms other methods in many cases and has some nice benefits, like not needing to adjust each device's local training or share extra data.

Keywords

  • Artificial intelligence
  • Boosting
  • Distillation
  • Ensemble model
  • Federated learning
  • One shot