Summary of An Enhanced Federated Prototype Learning Method Under Domain Shift, by Liang Kuang et al.
An Enhanced Federated Prototype Learning Method under Domain Shift
by Liang Kuang, Kuangpu Guo, Jian Liang, Jianguo Zhang
First submitted to arXiv on: 27 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper introduces Federated Prototype Learning with Convergent Clusters (FedPLCC), a novel approach to federated learning that addresses data heterogeneity across clients. By incorporating variance-aware dual-level prototype clustering and an α-sparsity prototype loss, FedPLCC increases intra-class similarity and reduces inter-class similarity. The algorithm weights each prototype by the size of its corresponding cluster, increasing inter-class distances, and selects only a proportion of prototypes for the loss calculation, reducing intra-class distances. Evaluation on the Digit-5, Office-10, and DomainNet datasets demonstrates improved performance over existing methods. |
| Low | GrooveSquid.com (original content) | Federated learning is a way for computers to work together without sharing private data. The problem is that when the computers hold different kinds of information, it is harder to train a good shared model. A new method called Federated Prototype Learning with Convergent Clusters (FedPLCC) tackles this by grouping similar information together while keeping different groups far apart from one another. This helps each computer learn better from the information it has. The paper tested FedPLCC on several datasets and found that it outperformed other methods. |
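To make the two main ideas in the medium summary concrete, here is a minimal sketch of a prototype loss that (a) weights each prototype by its cluster size and (b) keeps only a proportion of same-class prototypes when computing the loss. This is an illustrative toy, not the paper's actual implementation: the function name, the exponential similarity, and the exact form of the loss are assumptions for demonstration.

```python
import numpy as np

def fedplcc_style_loss(feature, prototypes, proto_labels, cluster_sizes,
                       label, select_ratio=0.5, alpha=1.0):
    """Toy prototype loss in the spirit of FedPLCC (not the paper's exact loss).

    feature:       (d,) feature vector of one sample
    prototypes:    (p, d) class prototypes gathered from clustering
    proto_labels:  (p,) class label of each prototype
    cluster_sizes: (p,) size of the cluster behind each prototype
    label:         ground-truth class of the sample
    select_ratio:  proportion of same-class prototypes kept (intra-class term)
    alpha:         distance exponent (stand-in for the α-sparsity idea)
    """
    # Distance of the feature to every prototype, raised to alpha.
    d = np.linalg.norm(prototypes - feature, axis=1) ** alpha
    # Weight each prototype by the relative size of its cluster.
    w = cluster_sizes / cluster_sizes.sum()
    sim = w * np.exp(-d)  # weighted similarity: larger when closer

    same = proto_labels == label
    same_sim = sim[same]
    # Keep only the closest fraction of same-class prototypes,
    # so distant same-class prototypes do not inflate intra-class distance.
    k = max(1, int(np.ceil(select_ratio * same_sim.size)))
    selected = np.sort(same_sim)[-k:]

    # Contrastive-style loss: pull toward selected same-class prototypes,
    # push away from (cluster-size-weighted) other-class prototypes.
    return -np.log(selected.sum() / (selected.sum() + sim[~same].sum()))
```

A feature lying near its own class's prototypes yields a smaller loss than one lying near another class's prototypes, which is the behavior the summary attributes to the weighting and selection steps.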
Keywords
* Artificial intelligence * Clustering * Federated learning