Summary of FedLoGe: Joint Local and Generic Federated Learning Under Long-tailed Data, by Zikai Xiao et al.


FedLoGe: Joint Local and Generic Federated Learning under Long-tailed Data

by Zikai Xiao, Zihan Chen, Liyinglan Liu, Yang Feng, Jian Wu, Wanlu Liu, Joey Tianyi Zhou, Howard Hao Yang, Zuozhu Liu

First submitted to arXiv on: 17 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Federated Long-Tailed Learning (Fed-LT) is a recently emerging paradigm in which data collected from decentralized clients follow a globally prevalent long-tailed distribution. Existing work focuses on mitigating this data imbalance to improve the global model while neglecting performance at the local level. This paper introduces Federated Local and Generic Model Training in Fed-LT (FedLoGe), which improves both local and generic model performance by combining representation learning and classifier alignment within a neural collapse framework. The approach uses a shared backbone to capture global trends and individualized classifiers to encapsulate local features. Inspired by neural collapse principles, a Static Sparse Equiangular Tight Frame Classifier (SSE-C) is developed that naturally prunes noisy features and fosters better data representations. In addition, Global and Local Adaptive Feature Realignment (GLA-FR) aligns global features with client preferences via a global classifier and personalized Euclidean-norm transfer. Experimental results on CIFAR-10/100-LT, ImageNet, and iNaturalist demonstrate the superiority of FedLoGe over state-of-the-art pFL and Fed-LT approaches.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning is a way for many devices to train a model together without sharing their data. When the data on these devices is combined, it can be very unbalanced: a few classes have many examples while most have only a few. This paper presents a new method for handling that imbalance. The method uses two kinds of models: one that works well globally and one that is tailored to each device. It also relies on a mathematical idea called neural collapse. The results show that this method works better than other methods for learning from such unbalanced data.

Keywords

  • Artificial intelligence
  • Alignment
  • Attention
  • Federated learning
  • Representation learning