Summary of FedAnchor: Enhancing Federated Semi-Supervised Learning with Label Contrastive Loss for Unlabeled Clients, by Xinchi Qiu et al.
FedAnchor: Enhancing Federated Semi-Supervised Learning with Label Contrastive Loss for Unlabeled Clients
by Xinchi Qiu, Yan Gao, Lorenzo Sani, Heng Pan, Wanru Zhao, Pedro P. B. Gusmao, Mina Alibeigi, Alex Iacob, Nicholas D. Lane
First submitted to arXiv on: 15 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | A federated learning (FL) approach is proposed, addressing challenges in deploying FL in real-world applications where client data is unlabeled. The innovation lies in introducing an anchor head paired with a classification head trained on labeled data held by the server. This double-head architecture mitigates the confirmation bias and overfitting issues associated with pseudo-labeling techniques. The method uses a label contrastive loss based on a cosine-similarity metric (a minimal sketch of such a loss appears after this table), and it outperforms state-of-the-art methods on the CIFAR-10/100 and SVHN datasets in terms of both convergence rate and model accuracy. |
| Low | GrooveSquid.com (original content) | Federated learning is a way for devices to work together to train a shared model without sharing their data. This is important because sometimes we can’t get detailed labels, which makes training harder. To solve this problem, researchers came up with a new method called FedAnchor that uses an anchor head and a classification head. The anchor head helps avoid mistakes when we’re not sure what’s in the data. By using a label contrastive loss, the model can be trained more accurately. This method did better than others on several standard image datasets. |
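To make the idea concrete, here is a minimal, hypothetical sketch of a label contrastive loss built on cosine similarity, loosely following the summary above: anchor-head embeddings are pulled toward a per-class anchor vector (which the server would compute from its labeled data), and unlabeled samples can be pseudo-labeled by their most similar anchor. The function names, temperature parameter, and anchor construction are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch, not the authors' exact method: a cosine-similarity
# label contrastive loss against per-class anchor embeddings.
import torch
import torch.nn.functional as F

def label_contrastive_loss(embeddings, labels, anchors, temperature=0.1):
    """embeddings: (N, D) anchor-head outputs; labels: (N,) true or pseudo labels;
    anchors: (C, D) one anchor embedding per class (assumed computed on the server)."""
    emb = F.normalize(embeddings, dim=1)      # unit-length sample embeddings
    anc = F.normalize(anchors, dim=1)         # unit-length class anchors
    logits = emb @ anc.t() / temperature      # cosine similarities used as logits
    return F.cross_entropy(logits, labels)    # pull each sample toward its class anchor

def pseudo_label(embeddings, anchors):
    """Assign each unlabeled sample the class of its most similar anchor."""
    emb = F.normalize(embeddings, dim=1)
    anc = F.normalize(anchors, dim=1)
    return (emb @ anc.t()).argmax(dim=1)
```

In this sketch, pseudo-labels come from similarity to fixed class anchors rather than from the model's own classification head, which is one plausible way a double-head design could reduce the confirmation bias mentioned in the summary.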
Keywords
* Artificial intelligence
* Classification
* Contrastive loss
* Cosine similarity
* Federated learning
* Overfitting