Summary of COME: Test-time Adaption by Conservatively Minimizing Entropy, by Qingyang Zhang et al.
COME: Test-time adaption by Conservatively Minimizing Entropy
by Qingyang Zhang, Yatao Bian, Xinke Kong, Peilin Zhao, Changqing Zhang
First submitted to arXiv on: 12 Oct 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed Conservatively Minimize the Entropy (COME) method is a simple yet effective drop-in replacement for traditional entropy minimization (EM) in test-time adaptation (TTA). By explicitly modeling uncertainty with Dirichlet prior distributions, COME regularizes the model to keep conservative confidence on unreliable samples. This improves optimization stability and boosts both classification accuracy and uncertainty estimation across standard, lifelong, and open-world TTA settings, achieving state-of-the-art results on commonly used benchmarks with improvements of up to 34.5% in accuracy and 15.1% in false positive rate (see the code sketch after this table). |
Low | GrooveSquid.com (original content) | Machine learning models need to adapt when the data they see at test time differs from their training data. One common way to do this is entropy minimization (EM). However, EM has a problem: it can make the model overconfident and cause it to collapse. To fix this, the authors created COME, a simple change to traditional EM. COME makes the model more cautious when predicting on uncertain samples, which helps it adapt reliably and make fewer mistakes. The method works well in different situations, including standard and open-world TTA. |
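To make the contrast concrete, below is a minimal, hypothetical PyTorch sketch of standard entropy minimization next to a conservative, Dirichlet-based variant in the spirit of COME. The evidence mapping (`exp` of the logits) and the function names are illustrative assumptions, not the authors' published implementation.

```python
import torch

def entropy_minimization_loss(logits: torch.Tensor) -> torch.Tensor:
    # Standard TTA objective: Shannon entropy of the softmax prediction.
    # Minimizing this pushes every sample toward a confident (peaked)
    # output, which is what can cause collapse on unreliable samples.
    probs = logits.softmax(dim=-1)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

def conservative_entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    # Sketch of a COME-style objective: treat non-negative evidence
    # (here exp(logits), an assumed mapping) as parameters of a Dirichlet
    # prior, form an opinion with K class belief masses plus an explicit
    # uncertainty mass u = K / S, and minimize the entropy of that
    # (K+1)-way distribution. Low-evidence samples keep a large
    # uncertainty mass, so they are not forced toward overconfidence.
    evidence = logits.exp()
    k = logits.shape[-1]
    strength = evidence.sum(dim=-1, keepdim=True) + k  # Dirichlet strength S
    belief = evidence / strength       # per-class belief masses, shape [B, K]
    uncertainty = k / strength         # leftover uncertainty mass, shape [B, 1]
    opinion = torch.cat([belief, uncertainty], dim=-1)  # each row sums to 1
    return -(opinion * opinion.clamp_min(1e-8).log()).sum(dim=-1).mean()
```

In a typical TTA loop, one of these losses would be computed on each incoming test batch and used to update only a small set of parameters (commonly the normalization layers' affine weights, as in Tent), leaving the rest of the model frozen.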
Keywords
* Artificial intelligence * Classification * Machine learning * Optimization