Overcoming label shift in targeted federated learning
by Edvin Listo Zec, Adam Breitholtz, Fredrik D. Johansson
First submitted to arXiv on: 6 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | A novel model aggregation scheme called FedPALS is proposed to address label shift problems in federated learning, which occur when the label distributions differ across clients or between clients and the target domain. Existing algorithms assume shared label distributions, but this assumption is often violated in real-world scenarios, leading to significant degradation of model performance. FedPALS adapts to label shifts by leveraging knowledge of the target label distribution at the central server, ensuring unbiased updates under stochastic gradient descent and robust generalization across clients with diverse data. Experimental results on image classification demonstrate that FedPALS consistently outperforms standard baselines by aligning model aggregation with the target domain. |
Low | GrooveSquid.com (original content) | Federated learning lets many people work together to train models without sharing personal information. This is great for scaling machine learning, but it’s not perfect. Sometimes the mix of labels (like “dog” or “cat”) seen by the people training the model differs from the mix the model will face in the end, and this can make the model perform poorly. A new way of combining models called FedPALS helps solve this problem by using information about the target label distribution at the central server. It’s like a special adapter that makes sure the model stays accurate across different situations. |
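The aggregation idea in the medium summary can be sketched in code. The snippet below is a minimal illustration, not the paper's actual algorithm: it assumes the server knows each client's label distribution and the target's, and it solves for simplex-constrained aggregation weights whose weighted client mixture best matches the target distribution. All names, numbers, and the squared-error objective are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical label distributions: 3 clients, 4 classes (each row sums to 1).
client_dists = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.4, 0.4],
])
# Target label distribution known at the central server (here: uniform).
target_dist = np.array([0.25, 0.25, 0.25, 0.25])

def mixture_gap(w):
    """Squared distance between the weighted client mixture and the target."""
    return float(np.sum((w @ client_dists - target_dist) ** 2))

n_clients = client_dists.shape[0]
result = minimize(
    mixture_gap,
    x0=np.full(n_clients, 1.0 / n_clients),  # start from uniform weights
    bounds=[(0.0, 1.0)] * n_clients,         # each weight stays in [0, 1]
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},  # weights sum to 1
)
weights = result.x

# The server would then combine client model updates with these weights
# instead of uniform or dataset-size-based ones:
#   global_update = sum(weights[k] * client_update[k] over clients k)
print(np.round(weights, 3))
```

The point of the sketch is the reweighting itself: by choosing aggregation weights that align the effective training label distribution with the target's, the server counteracts label shift without ever seeing client data.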
Keywords
» Artificial intelligence » Federated learning » Generalization » Image classification » Machine learning » Stochastic gradient descent