Decision-Focused Evaluation of Worst-Case Distribution Shift
by Kevin Ren, Yewon Byun, Bryan Wilder
First submitted to arXiv on: 4 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | In this paper, the researchers tackle the challenge of identifying potentially harmful distribution shifts in predictive models before deployment. While previous work focused on individual-level accuracy, the authors argue that this may not be sufficient for downstream population-level decisions. They introduce a hierarchical model structure to identify worst-case shifts in predictive resource allocation settings, capturing interactions between instances. By reformulating the problem as a submodular optimization problem, they develop efficient approximations of the worst-case loss. Applying the framework to real data, they find that different metrics can identify distinct worst-case distributions. |
| Low | GrooveSquid.com (original content) | This paper tries to solve a big problem in artificial intelligence. When machines make predictions, sometimes things change and the predictions don’t work well anymore. This is called a “distribution shift.” Usually, people try to find the biggest problems with individual predictions, but that’s not enough for some tasks. Imagine you have a limited resource, like food or medicine, and you need to decide who gets it. You want to make sure the people who really need it get it. The authors create a new way to think about distribution shifts in this kind of situation, taking into account how the decisions interact with each other. They test their idea on real data and find that different ways of measuring the problem can point to different worst cases. |
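The medium-difficulty summary mentions reformulating worst-case evaluation as a submodular optimization problem that admits efficient approximation. The paper's exact formulation is not given here, so the sketch below only illustrates the general technique: greedy maximization of a monotone submodular objective, which for such objectives achieves a (1 − 1/e) approximation of the best size-k subset. The toy objective (`covered_loss`), the candidate groups, and the loss values are all hypothetical stand-ins, not the authors' model.

```python
def greedy_submodular_max(f, ground_set, k):
    """Greedily pick up to k elements maximizing a set function f.

    For monotone submodular f, this classic greedy loop achieves a
    (1 - 1/e)-approximation of the optimal size-k subset.
    """
    selected = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)  # marginal gain of e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element adds positive value; stop early
            break
        selected.add(best)
    return selected

# Hypothetical stand-in for a worst-case objective: how much predictive
# "loss mass" a chosen set of shifted subgroups can cover.
losses = {1: 0.9, 2: 0.5, 3: 0.4, 4: 0.3}          # per-instance losses
groups = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}}   # candidate subgroups

def covered_loss(S):
    # Coverage functions like this one are monotone and submodular.
    covered = set().union(*(groups[g] for g in S)) if S else set()
    return sum(losses[i] for i in covered)

worst = greedy_submodular_max(covered_loss, set(groups), k=2)
print(worst, covered_loss(worst))  # picks "a" first (gain 1.4), then "c"
```

Greedy works here because coverage-style objectives have diminishing returns: once instance 2's loss is covered by group "a", group "b" only adds instance 3, so "c" becomes the better second pick.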
Keywords
* Artificial intelligence
* Optimization