Summary of Uncertainty Quantification Via Stable Distribution Propagation, by Felix Petersen et al.


Uncertainty Quantification via Stable Distribution Propagation

by Felix Petersen, Aashwin Mishra, Hilde Kuehne, Christian Borgelt, Oliver Deussen, Mikhail Yurochkin

First submitted to arXiv on: 13 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract. Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes a new approach for propagating stable probability distributions, such as Gaussian and Cauchy distributions, through neural networks to improve uncertainty quantification. The method is based on local linearization, which the authors show is an optimal approximation of the ReLU non-linearity in terms of total variation distance. This allows uncertainty in a network's inputs to be carried through its layers and turned into uncertainty estimates for its outputs. The utility of the approach is demonstrated on predicting calibrated confidence intervals and on selective prediction with out-of-distribution data.
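To make the idea concrete, here is a minimal sketch of propagating a Gaussian through an affine layer followed by a locally linearized ReLU. It assumes independent input components (a diagonal covariance), which is a common simplification for illustration and not necessarily the paper's exact formulation; the function names are hypothetical.

```python
import numpy as np

def propagate_linear(mu, var, W, b):
    # y = W @ x + b for x ~ N(mu, diag(var)): the mean transforms linearly,
    # and under the diagonal (independence) assumption the output variance
    # is (W**2) @ var.
    return W @ mu + b, (W ** 2) @ var

def propagate_relu(mu, var):
    # Local linearization of ReLU at the input mean: the non-linearity is
    # replaced by its tangent line there, so a Gaussian input stays Gaussian.
    # The slope is ReLU'(mu), i.e. 1 where mu > 0 and 0 elsewhere.
    slope = (mu > 0).astype(float)
    return np.maximum(mu, 0.0), slope ** 2 * var

# Toy example: one affine layer plus ReLU.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
b = np.zeros(3)
mu, var = np.array([1.0, -0.5]), np.array([0.1, 0.2])
mu1, var1 = propagate_linear(mu, var, W, b)
mu2, var2 = propagate_relu(mu1, var1)
```

The output pair `(mu2, var2)` is the Gaussian approximation of the layer's output distribution; stacking such steps propagates input uncertainty through a whole network.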
Low Difficulty Summary (original content by GrooveSquid.com)
Researchers have found a new way to make neural networks better at understanding how uncertain their answers are. This is important because neural networks can be very good at some tasks while having little sense of when they might be wrong. The new method uses something called local linearization, a simple straight-line approximation that lets uncertainty about the input be carried through the network step by step. That makes it possible to figure out how uncertain the final answers will be, which helps us know when to trust a network's predictions.

Keywords

* Artificial intelligence  * Machine learning  * Probability  * ReLU