Summary of Understanding Domain Generalization: A Noise Robustness Perspective, by Rui Qiao et al.
Understanding Domain Generalization: A Noise Robustness Perspective
by Rui Qiao, Bryan Kian Hsiang Low
First submitted to arXiv on: 26 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Machine learning algorithms for domain generalization (DG) have been developed rapidly, yet it remains unclear whether they actually outperform classic empirical risk minimization (ERM) on standard benchmarks. The paper investigates whether DG algorithms offer benefits over ERM by analyzing the impact of label noise. It finds that label noise exacerbates the effect of spurious correlations under ERM, leading to poor generalization; in contrast, DG algorithms exhibit an implicit robustness to label noise during finite-sample training, which helps mitigate spurious correlations and improve generalization. Even so, DG algorithms do not necessarily outperform ERM when evaluated on real-world datasets (a toy code sketch illustrating this ERM-versus-DG comparison follows the table).
Low | GrooveSquid.com (original content) | Machine learning is a way for computers to learn from data without being explicitly programmed. A model trained on one kind of data often has to make predictions on data that looks a bit different, and that is where many models struggle. This paper looks at how well models cope with that situation when some of their training labels are noisy or simply wrong. It finds that noisy labels can make an ordinary model latch onto misleading shortcuts in the data, so it gets worse at generalizing and making predictions. But some training methods have a built-in robustness to this noise, which means they can still make good predictions even when the training data is not perfect.
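
To make the ERM-versus-DG comparison above concrete, here is a minimal, hypothetical sketch (not code from the paper). It builds a toy dataset with one core feature and one spuriously correlated feature, flips a fraction of the training labels to simulate label noise, and trains plain ERM logistic regression alongside a GroupDRO-style worst-group reweighting, used here only as a stand-in for the family of DG/robustness algorithms the paper studies. The data generator, noise rate, group definition, and learning rates are all illustrative assumptions.

```python
# Hypothetical toy experiment, NOT the paper's code: symmetric label noise plus a
# spurious feature, comparing ERM against a GroupDRO-style worst-group objective
# (both plain logistic regression trained by gradient descent in NumPy).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr, noise_rate):
    """Labels y in {0,1}; a noisy 'core' feature tracks y; a low-noise 'spurious'
    feature agrees with y with probability `spurious_corr`; training labels are
    then flipped with probability `noise_rate` (symmetric label noise)."""
    y = rng.integers(0, 2, n)
    core = (2 * y - 1) + rng.normal(0.0, 1.0, n)
    agree = rng.random(n) < spurious_corr
    spur = np.where(agree, 2 * y - 1, 1 - 2 * y) + rng.normal(0.0, 0.1, n)
    group = 2 * y + agree.astype(int)          # 4 groups: (label, spurious agreement)
    y_noisy = np.where(rng.random(n) < noise_rate, 1 - y, y)
    return np.column_stack([core, spur]), y, y_noisy, group

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, group=None, steps=2000, lr=0.5):
    """Logistic regression. With `group`, up-weight the highest-loss group each
    step (exponentiated-gradient update, GroupDRO-style); otherwise plain ERM."""
    w, b = np.zeros(X.shape[1]), 0.0
    n_groups = 0 if group is None else int(group.max()) + 1
    q = None if group is None else np.full(n_groups, 1.0 / n_groups)
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        if group is None:
            sample_w = np.full(len(y), 1.0 / len(y))            # ERM: uniform weights
        else:
            g_loss = np.array([loss[group == g].mean() for g in range(n_groups)])
            q = q * np.exp(0.01 * g_loss)                       # up-weight high-loss groups
            q /= q.sum()
            sample_w = q[group] / np.bincount(group, minlength=n_groups)[group]
        grad = (p - y) * sample_w
        w -= lr * (X.T @ grad)
        b -= lr * grad.sum()
    return w, b

# Training distribution: strong spurious correlation plus 20% label noise.
Xtr, _, ytr_noisy, gtr = make_data(5000, spurious_corr=0.95, noise_rate=0.2)
# Shifted test distribution: spurious feature is uninformative, labels are clean.
Xte, yte, _, _ = make_data(5000, spurious_corr=0.5, noise_rate=0.0)

for name, g in [("ERM", None), ("GroupDRO-style", gtr)]:
    w, b = train(Xtr, ytr_noisy, g)
    acc = ((sigmoid(Xte @ w + b) > 0.5) == yte).mean()
    print(f"{name:15s} weights (core, spurious) = {np.round(w, 2)}, shifted-test acc = {acc:.3f}")
```

On the shifted test set the spurious feature no longer predicts the label, so a model that leans on it scores lower; the printed weights show how strongly each objective relies on that feature, which is the failure mode the summaries describe.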
Keywords
* Artificial intelligence * Domain generalization * Generalization * Machine learning