Summary of Unraveling Overoptimism and Publication Bias in ML-driven Science, by Pouria Saidi et al.
Unraveling overoptimism and publication bias in ML-driven science
by Pouria Saidi, Gautam Dasarathy, Visar Berisha
First submitted to arXiv on: 23 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on its arXiv page |
Medium | GrooveSquid.com (original content) | This research paper investigates overoptimism in reported Machine Learning (ML) model performance. Recent studies have shown that published ML results are often inflated, with an inverse relationship between sample size and reported accuracy: smaller studies tend to report higher accuracy. The study focuses on two key factors behind this: overfitting and publication bias. A novel stochastic model of observed accuracy is introduced, incorporating parametric learning curves and both of these biases. Theoretical and empirical results show that this framework can correct for the biases in observed data, providing realistic performance assessments from published results. The paper also applies the model to meta-analyses of ML-based classification of neurological conditions, estimating the inherent limits of ML-based prediction in each domain. |
Low | GrooveSquid.com (original content) | This study looks at why Machine Learning models are often reported as performing better than they really do. It starts from a surprising pattern: studies that use less data often report higher accuracy, not lower. To explain this, the researchers focus on two things: overfitting (when a model becomes too good at fitting its training data) and publication bias (when researchers only share their best results). They come up with a new way of estimating how well a model will really perform, taking these biases into account. This helps give a more accurate picture of how well Machine Learning can be used in different areas, like diagnosing neurological conditions. |
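The two mechanisms the summaries describe can be illustrated with a small simulation. The sketch below is not the paper's actual stochastic model; the learning-curve form (accuracy approaching a ceiling as a power law in sample size), the noise level, and the publication threshold are all illustrative assumptions. It shows how selectively publishing only results above a threshold inflates the average reported accuracy relative to the true curve.

```python
import random

def true_accuracy(n, a=0.80, b=0.9, c=0.5):
    # Illustrative parametric learning curve: accuracy rises toward the
    # ceiling `a` as the training-set size n grows (power-law form assumed).
    return a - b * n ** (-c)

def simulate(n, trials=2000, noise=0.05, publish_above=0.75, seed=0):
    # Each "study" observes the true accuracy plus finite-sample noise;
    # only studies above the threshold get published (publication bias).
    rng = random.Random(seed)
    observed, published = [], []
    for _ in range(trials):
        acc = true_accuracy(n) + rng.gauss(0, noise)
        observed.append(acc)
        if acc >= publish_above:
            published.append(acc)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(observed), mean(published) if published else float("nan")

all_mean, pub_mean = simulate(n=100)
# pub_mean exceeds all_mean: the published literature looks better than
# the underlying learning curve, and the gap is larger at small n.
```

Under these assumptions, the gap between the published mean and the unbiased mean shrinks as `n` grows, mirroring the inverse relationship between sample size and reported accuracy noted above.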
Keywords
» Artificial intelligence » Machine learning » Overfitting