Summary of Optimal Downsampling for Imbalanced Classification with Generalized Linear Models, by Yan Chen et al.
Optimal Downsampling for Imbalanced Classification with Generalized Linear Models
by Yan Chen, Jose Blanchet, Krzysztof Dembczynski, Laura Fee Nern, Aaron Flores
First submitted to arXiv on: 11 Oct 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes optimal downsampling for imbalanced classification with generalized linear models (GLMs), balancing statistical accuracy against computational efficiency. The authors develop a pseudo maximum likelihood estimator with asymptotic normality guarantees in a regime of increasingly imbalanced populations, and compute the optimal downsampling rate from a criterion that trades off these two factors. Numerical experiments on synthetic and empirical data validate the theory and show the proposed method outperforms commonly used alternatives. |
Low | GrooveSquid.com (original content) | This paper looks at what to do when a big dataset has far more examples of one class than the other. It uses a kind of math model called a generalized linear model (GLM) to work this out. The authors come up with a new way to estimate the model, with guarantees that get better as the dataset grows bigger and more imbalanced. They also find the best rate for throwing away some of the over-represented data so the problem is easier to compute. Their tests show the method works better than other approaches people have tried. |
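To make the idea concrete, here is a minimal sketch of downsampling for an imbalanced GLM, not the paper's estimator: we thin the majority (negative) class at a rate `r`, fit an ordinary logistic regression on the subsample, and then apply the standard case-control intercept correction (adding `log r` back to the fitted intercept). The synthetic data, the rate `r = 0.05`, and the helper `fit_logreg` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced data: rare positives, one feature.
n = 20000
X = rng.normal(size=(n, 1))
true_logits = -5.0 + 2.0 * X[:, 0]          # true intercept -5, slope 2
y = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(int)

# Downsample the majority (negative) class: keep each negative
# independently with probability r, keep all positives.
r = 0.05
keep = (y == 1) | (rng.random(n) < r)
Xs, ys = X[keep], y[keep]


def fit_logreg(X, y, iters=25):
    """Plain logistic regression via Newton's method (IRLS)."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ beta))
        grad = Xb.T @ (y - p)
        hess = Xb.T @ (Xb * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta


beta_s = fit_logreg(Xs, ys)

# Thinning negatives at rate r inflates the sample log-odds by log(1/r),
# so the classic correction adds log(r) back to the fitted intercept.
beta0_corrected = beta_s[0] + np.log(r)
```

After the correction, `beta0_corrected` should land near the true intercept of -5 while the slope estimate is unaffected by the thinning; the point of the paper is to choose `r` optimally rather than by rule of thumb as done here.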
Keywords
* Artificial intelligence * Classification * Likelihood