Detecting Model Misspecification in Amortized Bayesian Inference with Neural Networks: An Extended Investigation

by Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev

First submitted to arXiv on: 5 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract. Read the original abstract here.

Medium Difficulty Summary (original GrooveSquid.com content)
Recent advances in probabilistic deep learning have enabled efficient Bayesian inference in settings where the likelihood function is only implicitly defined by a simulation program. However, it is unclear how accurate this inference remains when the simulation does not faithfully represent reality. This paper identifies and investigates the types of model misspecification that can arise when neural posterior approximators are used for inference. It proposes an unsupervised misspecification measure that can be trained without data from the true distribution and applied at test time (a small illustrative sketch follows these summaries). The approach is demonstrated on scientific tasks from cell biology, cognitive decision making, disease outbreak dynamics, and computer vision. This work matters for users of neural posterior approximators, who can now be alerted when their predictions are not trustworthy.

Low Difficulty Summary (original GrooveSquid.com content)
This paper looks at how well a new way of doing Bayesian inference works when the simulation used to train it does not perfectly match reality. It finds that even small differences between the simulation and real data can make the results less accurate. To address this, the authors propose a new way to measure when a model is misspecified. They test their approach on several examples and show how it can help flag predictions that are not trustworthy.

Keywords

» Artificial intelligence  » Bayesian inference  » Deep learning  » Inference  » Likelihood  » Unsupervised