Parameter uncertainties for imperfect surrogate models in the low-noise regime

by Thomas D Swinburne, Danny Perez

First submitted to arXiv on: 2 Feb 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Data Analysis, Statistics and Probability (physics.data-an)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Bayesian regression determines model parameters by minimizing the expected loss, which is an upper bound to the true generalization error. However, this approach ignores misspecification, where models are imperfect. This leads to significantly underestimated parameter uncertainties that vanish in the large data limit. The issue is particularly problematic when building models of low-noise or near-deterministic calculations, as the main source of uncertainty is neglected. In this paper, we analyze the generalization error of misspecified, near-deterministic surrogate models, a regime of broad relevance in science and engineering. We show that posterior distributions must cover every training point to avoid a divergent generalization error and design an ansatz that respects this constraint, which incurs minimal overhead for linear models. Our efficient scheme gives accurate prediction and bounding of test errors where existing schemes fail, allowing this important source of uncertainty to be incorporated in computational workflows.
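To picture why this underestimation matters, here is a minimal sketch (not from the paper; a standard Bayesian linear regression on deliberately misspecified data) showing that the posterior parameter uncertainty shrinks toward zero as data accumulates, while the model error from misspecification never goes away:

```python
# Toy illustration: fit the misspecified linear model y = w*x to data from a
# nonlinear function. With a flat prior and a tiny assumed noise variance,
# the standard posterior std of w shrinks like 1/sqrt(n), but the RMS model
# error stays roughly constant -- the neglected source of uncertainty.
import numpy as np

rng = np.random.default_rng(0)

def fit_misspecified(n, noise_var=1e-6):
    x = rng.uniform(-1.0, 1.0, n)
    y = np.sin(2.0 * x)                   # true function is nonlinear: model is wrong
    w_map = (x @ y) / (x @ x)             # least-squares / MAP estimate of the slope
    std_w = np.sqrt(noise_var / (x @ x))  # standard posterior std, ~ 1/sqrt(n)
    rmse = np.sqrt(np.mean((y - w_map * x) ** 2))  # irreducible misspecification error
    return std_w, rmse

for n in (10, 100, 1_000, 10_000):
    std_w, rmse = fit_misspecified(n)
    print(f"n={n:6d}  posterior std of w = {std_w:.2e}  RMS model error = {rmse:.3f}")
```

The posterior std collapses with growing n because it only reflects the (here nearly zero) noise variance, which is exactly the vanishing-uncertainty problem the paper targets in the low-noise regime.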
Low Difficulty Summary (written by GrooveSquid.com; original content)
Bayesian regression is a way to figure out model parameters by minimizing the expected loss. But this approach doesn’t account for the model itself being wrong. That leads to big underestimates of how uncertain the model’s predictions really are, and the underestimate gets worse as more data is added. When we build models of things that behave almost the same every time, like near-deterministic calculations in science and engineering, this missing uncertainty is the main one, so the mistake really matters. In this paper, we look at what happens when models are imperfect and show that their posterior distributions need to cover every training point, or the predicted generalization error blows up. We also design an efficient way to enforce this, with almost no extra cost for linear models, and show that it predicts and bounds test errors where existing methods fail.
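The "cover all the training points" requirement can be pictured with a toy calculation. The sketch below is an assumed illustration, not the paper's actual ansatz: it simply inflates the posterior width on the slope of a misspecified linear fit until a two-sigma predictive band contains every training residual, and compares that width to what the standard low-noise posterior gives:

```python
# Toy sketch of the coverage constraint (assumed illustration, not the
# paper's ansatz): widen the posterior on w until the 2-sigma predictive
# band w_map*x +/- 2*std_w*|x| covers every training point.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0.2, 1.0, n)   # keep |x| away from 0 so the simple bound below is clean
y = np.sin(2.0 * x)            # nonlinear target for a misspecified linear model

w_map = (x @ y) / (x @ x)      # least-squares fit of y ~ w*x
resid = y - w_map * x

# Standard low-noise posterior width: far too narrow to explain the residuals.
noise_var = 1e-6
std_w_bayes = np.sqrt(noise_var / (x @ x))

# Coverage-inflated width: smallest std on w whose 2-sigma band covers all data.
std_w_cover = np.max(np.abs(resid) / (2.0 * np.abs(x)))
assert np.all(np.abs(resid) <= 2.0 * std_w_cover * np.abs(x) + 1e-12)

print(f"standard posterior std of w : {std_w_bayes:.2e}")
print(f"coverage-inflated std of w  : {std_w_cover:.2e}")
```

The coverage-inflated width comes out orders of magnitude larger than the standard posterior width, which is the qualitative gap the paper's scheme is designed to close.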

Keywords

* Artificial intelligence
* Generalization
* Regression