
Knowledge Distillation in Federated Learning: a Survey on Long Lasting Challenges and New Solutions

by Laiqiao Qin, Tianqing Zhu, Wanlei Zhou, Philip S. Yu

First submitted to arXiv on: 16 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on the paper’s arXiv page.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper presents a comprehensive review of Knowledge Distillation (KD)-based Federated Learning (FL), focusing on how KD can address the challenges of traditional FL. KD, a well-established model compression technique, transfers knowledge between models by exchanging logits or intermediate-layer representations rather than model parameters. The authors discuss how KD can mitigate privacy risks, data heterogeneity, communication bottlenecks, and system heterogeneity in FL. They provide an overview of KD-based FL, including its motivation, basics, and taxonomy, together with a comparison against traditional FL, and they analyze the critical factors in KD-based FL, such as teachers, knowledge, data, and methods. Furthermore, they discuss how KD can improve communication efficiency, personalization, and privacy protection in FL. The survey aims to provide insights and guidance for researchers and practitioners working on FL.
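To make the logit-exchange idea concrete, here is a minimal PyTorch-style sketch of one distillation-based federated round. The toy linear models, random proxy batch, temperature, and simple averaging are illustrative assumptions, not the specific methods surveyed in the paper.

```python
# Minimal sketch of logit-based knowledge distillation as used in
# KD-based federated learning (illustrative placeholders only).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes are comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)

def federated_distillation_round(client_models, public_batch, lr=1e-3, temperature=2.0):
    # Each client shares only its logits on a shared (public) proxy batch;
    # the "server" averages them, and each client distils from the ensemble.
    with torch.no_grad():
        client_logits = [m(public_batch) for m in client_models]
        ensemble_logits = torch.stack(client_logits).mean(dim=0)  # server-side aggregation
    for model in client_models:
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        opt.zero_grad()
        loss = distillation_loss(model(public_batch), ensemble_logits, temperature)
        loss.backward()
        opt.step()

if __name__ == "__main__":
    # Toy example: three heterogeneous-in-principle clients and a random proxy batch.
    clients = [torch.nn.Linear(16, 10) for _ in range(3)]
    proxy = torch.randn(32, 16)  # shared unlabeled data standing in for a public dataset
    federated_distillation_round(clients, proxy)
```

Because only logits on the proxy data are exchanged, the communication cost depends on the proxy batch and class count rather than on the model size, which is one way KD-based FL can relax the communication and system-heterogeneity constraints of parameter-averaging FL.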
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper talks about a way to make machine learning models work better together without sharing all their data. It’s called Federated Learning (FL), and it helps keep the data private. The authors look at how adding another technique, called Knowledge Distillation (KD), can help FL overcome some of its challenges. They explain what KD does and how it can improve communication efficiency, personalization, and privacy protection in FL. The paper reviews the current ideas on using KD with FL and hopes to provide helpful insights for researchers working on this topic.

Keywords

» Artificial intelligence  » Federated learning  » Knowledge distillation  » Logits  » Machine learning  » Model compression