Summary of Towards a Better Evaluation of Out-of-Domain Generalization, by Duhun Hwang et al.
Towards a Better Evaluation of Out-of-Domain Generalization
by Duhun Hwang, Suhyun Kang, Moonjung Eo, Jimyeong Kim, Wonjong Rhee
First submitted to arXiv on: 30 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This study focuses on improving Domain Generalization (DG) algorithms by investigating the limitations of existing evaluation measures. Specifically, it examines the average measure, which is commonly used to evaluate models and compare algorithms in DG studies. The authors argue that this measure is of questionable suitability for approximating true domain generalization performance, and they propose an alternative worst+gap measure as a more robust solution. They provide theoretical justification for the proposal through two derived theorems and conduct extensive experiments on modified datasets, including SR-CMNIST, C-Cats&Dogs, L-CIFAR10, PACS-corrupted, and VLCS-corrupted. The results demonstrate that the average measure performs poorly at estimating true DG performance, whereas the worst+gap measure is markedly more robust (a toy numerical illustration follows the table).
Low | GrooveSquid.com (original content) | This study wants to help computers get better at handling things they haven't seen before. Right now, people use a certain way to measure how well these computers do, but it isn't perfect. The researchers think this method is flawed and propose a new one that's more reliable. They show why their new method is good by looking at the math behind it and testing it on different datasets they created. They found that the old method doesn't work very well, while their new one does.
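The contrast between the average measure and a worst-case-oriented measure can be made concrete with a toy example. The sketch below uses invented per-domain accuracies for two hypothetical models; it only illustrates why averaging across held-out domains can hide a failure on one domain, and it does not reproduce the paper's exact worst+gap formula (the gap term is defined in the paper itself).

```python
import numpy as np

# Hypothetical per-domain test accuracies for two models on four held-out
# domains (values are invented purely for illustration).
model_a = np.array([0.95, 0.93, 0.90, 0.55])  # strong on most domains, collapses on one
model_b = np.array([0.85, 0.84, 0.83, 0.80])  # uniformly decent

for name, accs in [("model A", model_a), ("model B", model_b)]:
    average_measure = accs.mean()              # the commonly used average measure
    worst_measure = accs.min()                 # worst-case domain accuracy
    spread = average_measure - worst_measure   # how much the average hides
    print(f"{name}: average={average_measure:.3f}  worst={worst_measure:.3f}  spread={spread:.3f}")

# The average measure ranks model A above model B (0.8325 vs 0.830), even
# though model A fails badly on one domain. A worst-case-oriented measure
# (such as the paper's worst+gap measure) ranks model B higher, which better
# reflects out-of-domain robustness.
```

Running this prints the two models' scores side by side; the ranking flip between the average and the worst-case columns is the kind of discrepancy the paper's experiments on SR-CMNIST, C-Cats&Dogs, L-CIFAR10, PACS-corrupted, and VLCS-corrupted are designed to expose.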
Keywords
» Artificial intelligence » Domain generalization