
Re-evaluating Group Robustness via Adaptive Class-Specific Scaling

by Seonguk Seo, Bohyung Han

First submitted to arXiv on: 19 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper and are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This is the paper’s original abstract; read it on the paper’s arXiv page.

Medium Difficulty Summary (original content by GrooveSquid.com)
Group distributionally robust optimization is a framework for improving worst-group and unbiased accuracies by mitigating spurious correlations and dataset bias. Existing approaches have achieved gains in robust accuracy, but often at the cost of average accuracy due to an inherent trade-off. To control this trade-off flexibly and efficiently, we propose a simple class-specific scaling strategy that can be applied directly to existing debiasing algorithms with no additional training. We also develop an instance-wise adaptive scaling technique that alleviates the trade-off, improving both robust and average accuracies. Our analysis reveals that, once equipped with class-specific scaling, even a naive ERM baseline matches or outperforms recent debiasing methods. We additionally introduce a novel unified metric that quantifies the trade-off between the two accuracies as a single scalar, enabling comprehensive evaluation of existing algorithms. By tackling the inherent trade-off and charting the performance landscape, our approach offers insights into robustness techniques that go beyond robust accuracy alone.
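
The class-specific scaling described above lends itself to a short sketch. The snippet below is a minimal illustration under assumed details, not the paper’s exact procedure: it supposes a binary classification task, multiplies held-out logits by per-class factors at inference time, and grid-searches the relative scale that maximizes worst-group accuracy. The function names (`scaled_predict`, `worst_group_accuracy`, `tune_scale`) and the search grid are illustrative choices.

```python
import numpy as np

def scaled_predict(logits: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Class predictions after multiplying each class's logit by its scale.

    logits: (num_samples, num_classes); scales: one positive factor per class.
    """
    return np.argmax(logits * scales, axis=1)

def worst_group_accuracy(preds, labels, groups):
    """Minimum accuracy over the annotated groups."""
    return min((preds[groups == g] == labels[groups == g]).mean()
               for g in np.unique(groups))

def tune_scale(logits, labels, groups, grid=np.logspace(-1, 1, 21)):
    """Sweep a relative scale for class 1 (binary case) on validation data
    and keep the setting with the best worst-group accuracy."""
    best_s, best_acc = 1.0, -1.0
    for s in grid:
        scales = np.array([1.0, s])  # scale class 1 relative to class 0
        acc = worst_group_accuracy(scaled_predict(logits, scales),
                                   labels, groups)
        if acc > best_acc:
            best_s, best_acc = s, acc
    return best_s, best_acc
```

Because the sweep only rescales post-hoc prediction scores, it can sit on top of any already-trained model, which is what lets the technique attach to existing debiasing algorithms without additional training.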

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making computer models fairer and more accurate by reducing biases in data. Existing methods have improved fairness, but often at the cost of overall performance. The authors propose two ways to balance fairness and accuracy: one that rescales the model’s predictions for each class, and another that adapts those adjustments to individual examples. Surprisingly, with these adjustments a simple baseline method outperforms more complex ones. The paper also introduces a new way to measure, in a single number, how well a model balances fairness and accuracy. By understanding this trade-off, researchers can develop better models for real-world applications.
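
The unified trade-off metric is only named, not defined, in this summary, so the sketch below uses an assumed stand-in rather than the paper’s formula: sweep the class scale, collect (average accuracy, worst-group accuracy) pairs, and reduce the resulting trade-off curve to one scalar via the area under it. All names and the area-based reduction are hypothetical.

```python
import numpy as np

def tradeoff_points(logits, labels, groups, grid=np.logspace(-1, 1, 41)):
    """Trace (average accuracy, worst-group accuracy) pairs as the
    class-1 scale varies (binary case), sorted by average accuracy."""
    points = []
    for s in grid:
        preds = np.argmax(logits * np.array([1.0, s]), axis=1)
        avg = (preds == labels).mean()
        worst = min((preds[groups == g] == labels[groups == g]).mean()
                    for g in np.unique(groups))
        points.append((avg, worst))
    return sorted(points)

def scalar_tradeoff(points):
    """Trapezoidal area under the (average, worst-group) accuracy curve;
    a single number summarizing the whole trade-off landscape."""
    xs, ys = zip(*points)
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))
```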

Keywords

» Artificial intelligence  » Optimization