Summary of Federated Distillation: A Survey, by Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi and Dacheng Tao


Federated Distillation: A Survey

by Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi and Dacheng Tao

First submitted to arXiv on: 2 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This survey introduces Federated Distillation (FD), which integrates Knowledge Distillation (KD) into Federated Learning (FL) to enable more flexible knowledge transfer between clients and the server. Because participants exchange distilled knowledge rather than full model parameters, FD removes the requirement that clients and the server use identical model architectures and significantly reduces the communication costs of training large-scale models. The authors provide a comprehensive overview of FD, covering its fundamental principles, the approaches developed for its key challenges, and its applications across diverse scenarios. A minimal code sketch of this knowledge-exchange pattern follows the Low Difficulty Summary below.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about training artificial intelligence models without sharing sensitive data. Today, this kind of collaborative training usually requires every participating device to use the same model architecture, and it becomes slow and costly when many devices are involved. To address these problems, researchers combine two ideas: Federated Learning (training together without pooling data) and Knowledge Distillation (transferring what one model has learned to another). The result is called Federated Distillation. It gives devices more flexibility in how their models are built and makes the overall process more efficient. This paper explains how it works, the challenges it addresses, and the different ways it can be used.
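
The Medium Difficulty Summary describes clients and the server exchanging distilled knowledge instead of model parameters. The PyTorch sketch below illustrates that general pattern only; the shared proxy dataset, the differing client architectures, the logit-averaging rule, and all hyperparameters are illustrative assumptions made for this page and are not taken from the survey itself.

```python
# Minimal sketch of one Federated Distillation round (hypothetical setup, not the
# survey's specific algorithm): clients share predictions on a common proxy set
# instead of model weights, so their architectures are free to differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_client_model(hidden):           # clients may use different architectures
    return nn.Sequential(nn.Linear(20, hidden), nn.ReLU(), nn.Linear(hidden, 5))

proxy_x = torch.randn(128, 20)           # shared, unlabeled proxy/public data
clients = [make_client_model(h) for h in (16, 32, 64)]

def local_train(model, x, y, epochs=1):
    # Ordinary supervised training on a client's private data.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

def distill(model, x, teacher_logits, temperature=2.0, epochs=1):
    # Knowledge distillation: match the client's softened predictions on the
    # proxy data to the aggregated (teacher) predictions.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    target = F.softmax(teacher_logits / temperature, dim=1)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(model(x) / temperature, dim=1),
                        target, reduction="batchmean")
        loss.backward()
        opt.step()

for round_ in range(3):
    # 1) Each client trains on its own private (never shared) data.
    for m in clients:
        x_priv = torch.randn(64, 20)             # stand-in for private data
        y_priv = torch.randint(0, 5, (64,))
        local_train(m, x_priv, y_priv)
    # 2) Clients upload only their logits on the proxy data (cheap to communicate).
    with torch.no_grad():
        client_logits = torch.stack([m(proxy_x) for m in clients])
    # 3) The server aggregates the knowledge, here by simply averaging the logits.
    global_logits = client_logits.mean(dim=0)
    # 4) Clients distill the aggregated knowledge back into their local models.
    for m in clients:
        distill(m, proxy_x, global_logits)
```

Because each client uploads only a matrix of logits (number of proxy examples by number of classes) rather than its full parameter vector, the per-round communication cost does not grow with model size, which is the property both summaries highlight.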

Keywords

  • Artificial intelligence
  • Distillation
  • Federated learning
  • Knowledge distillation