

FedFixer: Mitigating Heterogeneous Label Noise in Federated Learning

by Xinyuan Ji, Zhaowei Zhu, Wei Xi, Olga Gadyatskaya, Zilong Song, Yong Cai, Yang Liu

First submitted to arXiv on: 25 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
In this paper, the researchers propose FedFixer, a new approach for improving Federated Learning (FL) performance under heterogeneous label noise. Existing methods struggle to distinguish client-specific samples from noisy-label samples, which degrades performance. To address this, the authors introduce a personalized model that cooperates with the global model to select clean client-specific samples, and they employ confidence and distance regularizers to mitigate the overfitting caused by limited local data and noisy labels. Experiments show that FedFixer effectively filters out noisy-label samples across different clients, particularly in scenarios with highly heterogeneous label noise.

Low Difficulty Summary (original content by GrooveSquid.com)
Federated Learning is a way for many devices to learn together without sharing their data. But when the labels (the correct answers) are unreliable, learning gets harder. This paper tackles the problem with two models: one that works only on each device and another that is shared by everyone. The method keeps these models from drifting too far apart, so they can work well together. The result is a better way to learn from bad labels.

Keywords

* Artificial intelligence  * Federated learning  * Overfitting