
Federated Learning with Label-Masking Distillation

by Jianghu Lu, Shikun Li, Kexin Bao, Pengju Wang, Zhenxing Qian, Shiming Ge

First submitted to arXiv on: 20 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, which can be read on the paper’s arXiv page.

Medium Difficulty Summary (original content by GrooveSquid.com)
In this paper, the researchers address label distribution skew in federated learning, where different clients hold data with very different label distributions. They propose Federated Learning with Label-Masking Distillation (FedLMD), which classifies each client’s labels into majority and minority classes and lets clients learn from their local data while a distillation term preserves knowledge of the minority labels. Experimental results demonstrate state-of-the-art performance, and a teacher-free variant, FedLMD-Tf, outperforms previous lightweight methods without increasing computational costs. The work contributes to privacy-preserving collaborative model training under heterogeneous data; a hedged code sketch of the label-masking idea is given after the summaries below.
Low Difficulty Summary (original content by GrooveSquid.com)
Federated learning helps computers share knowledge while keeping data private. In this research, scientists tackle a problem that happens when different devices hold very different mixes of labels in their data. They create a method called FedLMD that lets devices learn from their own data while preserving what the shared model knows about the labels they rarely see. The researchers tested the idea and found it works well, and a lighter variant performs strongly even without extra help from a powerful teacher model.
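
To make the masking idea concrete, here is a minimal PyTorch sketch of one plausible reading of label-masking distillation on a single client. It is an illustration under assumptions, not the paper’s exact algorithm: the function names, the frequency-based majority/minority split, and the hyperparameters (kd_weight, temperature) are invented for this example, and the teacher stands in for the frozen global model (the FedLMD-Tf variant avoids an explicit teacher).

```python
# Hedged sketch of label-masking distillation for one client (not the paper's
# exact formulation; names, the split rule, and hyperparameters are assumed).
import torch
import torch.nn.functional as F

def minority_class_mask(label_counts: torch.Tensor) -> torch.Tensor:
    """Mark a class as 'minority' if its local frequency falls below the
    uniform share 1/C. This split rule is an assumed heuristic."""
    freq = label_counts.float() / label_counts.sum()
    return freq < 1.0 / len(label_counts)

def fedlmd_local_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      targets: torch.Tensor,
                      minority_mask: torch.Tensor,
                      kd_weight: float = 1.0,
                      temperature: float = 2.0) -> torch.Tensor:
    # Ordinary cross-entropy on the client's own labels, which are dominated
    # by its majority classes.
    ce = F.cross_entropy(student_logits, targets)
    # Keep only minority-class logits before the softmax so distillation
    # transfers only the global (teacher) model's minority-label knowledge.
    idx = minority_mask.nonzero(as_tuple=True)[0]
    log_p_student = F.log_softmax(student_logits[:, idx] / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits[:, idx] / temperature, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
    return ce + kd_weight * kd

# Toy usage with made-up shapes: 8 samples, 10 classes.
counts = torch.tensor([50, 40, 30, 2, 1, 0, 45, 3, 1, 0])  # local label counts
mask = minority_class_mask(counts)
student_logits = torch.randn(8, 10)   # local model outputs
teacher_logits = torch.randn(8, 10)   # frozen global model outputs
labels = torch.randint(0, 10, (8,))
loss = fedlmd_local_loss(student_logits, teacher_logits, labels, mask)
```

The design point the sketch tries to capture is that plain cross-entropy on skewed local data erases whatever the global model knew about rare labels; restricting the distillation term to the minority classes is one simple way to retain that knowledge without fighting the supervised loss on the majority classes.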

Keywords

» Artificial intelligence  » Distillation  » Federated learning  » Teacher model