Summary of Confidence Intervals and Simultaneous Confidence Bands Based on Deep Learning, by Asaf Ben Arie et al.
Confidence Intervals and Simultaneous Confidence Bands Based on Deep Learning
by Asaf Ben Arie, Malka Gorfine
First submitted to arXiv on: 20 Jun 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This research paper proposes a novel non-parametric bootstrap method for estimating prediction uncertainty in deep learning models, particularly for survival (time-to-event) data with right-censored outcomes. The authors highlight the limitations of existing approaches, including Bayesian posterior credible intervals and frequentist confidence-interval estimation, which can yield invalid or overly conservative results. The proposed method disentangles data uncertainty from the noise introduced by the optimization algorithm, ensuring accurate point-wise confidence intervals and simultaneous confidence bands. It is demonstrated by constructing simultaneous confidence bands for survival curves derived from deep neural networks. |
Low | GrooveSquid.com (original content) | Deep learning models have made big improvements in predicting things like medical diagnoses and stock prices. But there's an important problem: we often don't know how sure these predictions are. That matters, because if we're not confident in a prediction, we might need to double-check it or gather more information. The best current ways to estimate this uncertainty don't work well for all kinds of data. In this research, scientists developed a new way to estimate uncertainty that works even when some outcomes are missing, or "censored." The new method can be paired with any deep learning model and helps produce reliable uncertainty estimates alongside its predictions. |
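To make the idea concrete, here is a minimal sketch of a generic non-parametric bootstrap for point-wise confidence intervals and a sup-t simultaneous confidence band over a prediction grid. This is an illustration of the general technique only, not the authors' exact procedure: it uses a toy least-squares fit (`fit_predict`) as a stand-in for a deep network, and it does not implement the paper's key refinement of separating data uncertainty from optimization-algorithm noise. All names and parameters here (`fit_predict`, `B`, the grid) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data standing in for any predictive task.
x = np.linspace(0, 1, 60)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)

def fit_predict(xs, ys, grid):
    """Stand-in for training a deep model: an ordinary least-squares line fit."""
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope * grid + intercept

grid = np.linspace(0, 1, 25)
point_est = fit_predict(x, y, grid)

# Non-parametric bootstrap: resample (x, y) pairs with replacement,
# refit the model, and predict on the same grid.
B = 500
boot_preds = np.empty((B, grid.size))
for b in range(B):
    idx = rng.integers(0, x.size, size=x.size)
    boot_preds[b] = fit_predict(x[idx], y[idx], grid)

# Point-wise 95% confidence intervals (percentile method).
lo, hi = np.percentile(boot_preds, [2.5, 97.5], axis=0)

# Simultaneous 95% band (sup-t construction): take the bootstrap quantile
# of the maximal standardized deviation across the whole grid, then widen
# the band by that factor at every grid point.
se = boot_preds.std(axis=0, ddof=1)
t_stats = (np.abs(boot_preds - point_est) / se).max(axis=1)
c = np.quantile(t_stats, 0.95)
band_lo = point_est - c * se
band_hi = point_est + c * se
```

The simultaneous band is necessarily at least as wide as a single point-wise interval, since it must cover the entire curve with 95% probability rather than each point separately; in the paper this construction is applied to survival curves predicted by deep networks, where each bootstrap replicate would involve retraining the network.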
Keywords
- Artificial intelligence
- Deep learning
- Optimization