


The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness

by Mengyao Du, Miao Zhang, Yuwen Pu, Kai Xu, Shouling Ji, Quanjun Yin

First submitted to arXiv on: 25 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper investigates combining federated learning with fine-tuning to address dataset scarcity and privacy concerns. However, it reveals that federated learning can skew fine-tuning features and compromise out-of-distribution robustness. To mitigate this, the authors introduce a robust algorithm called GNP (General Noisy Projection), which transfers robustness from the pre-trained model to the fine-tuned model while adding Gaussian noise to enhance representative capacity. The approach is shown to be effective across various scenarios and datasets.
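The summary above describes GNP only at a high level: project fine-tuned features back toward the pre-trained model's features and add Gaussian noise. The paper's exact formulation is not given here, so the following is a minimal sketch of that general idea; the function name `gnp_features` and the parameters `alpha` (projection weight) and `sigma` (noise scale) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def gnp_features(pretrained_feat, finetuned_feat, alpha=0.5, sigma=0.1, rng=None):
    """Hypothetical GNP-style feature update (illustrative sketch only).

    Blends the fine-tuned features back toward the pre-trained features
    (the "projection", transferring robustness from the pre-trained model)
    and adds Gaussian noise to enhance representative capacity.

    alpha: assumed interpolation weight toward the pre-trained features.
    sigma: assumed standard deviation of the added Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Projection step: pull fine-tuned features toward pre-trained ones.
    projected = alpha * pretrained_feat + (1.0 - alpha) * finetuned_feat
    # Noise step: perturb with zero-mean Gaussian noise.
    noise = rng.normal(0.0, sigma, size=projected.shape)
    return projected + noise
```

With `alpha=1.0` and `sigma=0.0` this sketch simply returns the pre-trained features, which makes the role of each knob easy to check in isolation.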
Low Difficulty Summary (written by GrooveSquid.com; original content)
Federated learning helps with dataset scarcity, but it has a problem: it can make models worse at handling things that differ from what they were trained on. To fix this, the researchers created a new way to train models called GNP (General Noisy Projection). It’s like taking a picture of the model and then adding some noise to help it recognize things it hasn’t seen before. They tested their method on many different datasets and found that it worked well in most cases.

Keywords

  • Artificial intelligence
  • Federated learning
  • Fine-tuning