Summary of FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning, by Xiuhua Lu et al.
FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning
by Xiuhua Lu, Peng Li, Xuefeng Jiang
First submitted to arXiv on: 18 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: This paper proposes FedLF, a new federated learning approach designed to address the combined challenges of heterogeneous data and long-tailed distributions in real-world datasets. Traditional federated learning methods primarily address heterogeneity among clients but fail to consider class-wise bias in globally long-tailed data, so the resulting models favor head classes at the expense of tail classes. To mitigate this, FedLF introduces three modifications during local training: adaptive logit adjustment, continuous class-centered optimization, and feature decorrelation. The authors compare FedLF against seven state-of-the-art methods under varying degrees of data heterogeneity and long-tailed imbalance on the benchmark datasets CIFAR-10-LT and CIFAR-100-LT, showing that their approach effectively improves model performance. This work is particularly relevant for distributed machine learning applications where preserving privacy and achieving fair model performance are both crucial. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: Federated learning is a way to train machines to learn from many devices without sharing personal data. However, this method runs into trouble on real-world datasets whose categories are not equally represented. This paper proposes a new approach called FedLF to solve these problems. It makes three changes during local training: adjusting the logit values, optimizing class centers, and decorrelating features. The authors tested their method against seven other methods on two benchmark datasets and found that it worked better. |
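As a rough illustration of the logit-adjustment idea mentioned in the summaries (not the paper's exact adaptive formulation — the per-class counts, the temperature `tau`, and the function name below are assumptions for the sketch), a cross-entropy loss whose logits are shifted by the log class prior looks like this:

```python
import numpy as np

def logit_adjusted_loss(logits, label, class_counts, tau=1.0):
    """Cross-entropy with logits shifted by tau * log(class prior).

    Boosting head-class logits during training forces the model to learn
    larger margins for tail classes, counteracting head-class bias.
    """
    priors = class_counts / class_counts.sum()
    adjusted = logits + tau * np.log(priors)
    # numerically stable log-softmax
    z = adjusted - adjusted.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

# With equal logits, the adjusted loss penalizes a tail-class label more
# heavily than plain cross-entropy (tau=0), steering training toward
# better tail-class margins.
counts = np.array([900.0, 100.0])   # head class 0, tail class 1
plain = logit_adjusted_loss(np.zeros(2), 1, counts, tau=0.0)
adjusted = logit_adjusted_loss(np.zeros(2), 1, counts, tau=1.0)
```

FedLF's contribution is to make this adjustment adaptive during local client training rather than fixed; the sketch above only shows the static baseline version of the idea.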
Keywords
» Artificial intelligence » Federated learning » Machine learning » Optimization