Impact of Noisy Supervision in Foundation Model Learning

by Hao Chen, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj, Jindong Wang

First submitted to arXiv on: 11 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper presents a comprehensive study of the impact of label noise in the large-scale datasets used to pre-train foundation models. The authors analyze how slight noise in pre-training can benefit in-domain performance yet consistently degrades out-of-domain performance. They show that pre-training noise shapes the feature space differently, leading to poor generalization. To mitigate these effects, the authors propose a tuning method called NMTune, which reshapes the feature space to improve generalization. The paper evaluates the proposed approach on various vision and language models using realistic noisy data.

Low Difficulty Summary (written by GrooveSquid.com; original content)
This study looks at how noise in pre-training datasets affects how well foundation models work. The researchers found that small amounts of noise can actually help a model on tasks within its training domain, but consistently hurts it on tasks outside that domain. They attribute this to the noise changing how the model’s features are arranged, making it harder to generalize. To fix this problem, they came up with a new way to tune models, called NMTune, which helps by reshaping the feature space. They tested their approach on different types of models and found it worked well.

Keywords

* Artificial intelligence
* Generalization