
Summary of Variational Bayesian Last Layers, by James Harrison et al.


Variational Bayesian Last Layers

by James Harrison, John Willes, Jasper Snoek

First submitted to arXiv on: 17 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
Read the original abstract here
Medium Difficulty Summary (GrooveSquid.com, original content)
This paper introduces a novel deterministic variational approach to train Bayesian last-layer neural networks, offering a sampling-free, single-pass model and loss that enhances uncertainty estimation. The proposed Variational Bayesian Last Layer (VBLL) can be trained and evaluated with nearly quadratic complexity in the last layer width, making it computationally efficient for standard architectures. Experimental results demonstrate improved predictive accuracy, calibration, and out-of-distribution detection over baselines in both regression and classification tasks. Furthermore, combining VBLL layers with variational Bayesian feature learning yields a lower-variance collapsed variational inference method for Bayesian neural networks.
Low Difficulty Summary (GrooveSquid.com, original content)
This paper finds a way to make Bayesian last-layer neural networks work better. It does this by introducing a new approach that doesn’t need sampling and can be used on top of regular neural networks. The new method, called VBLL, helps predict things more accurately and makes it easier to detect when something is outside the normal range. The authors tested VBLL on different tasks and found that it works better than other methods in both predicting numbers and classifying things. They also showed that combining VBLL with another technique can make Bayesian neural networks even better.
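To make the sampling-free idea concrete, here is a minimal sketch of a Bayesian last layer for regression. It is illustrative only, not the paper's exact VBLL model or objective: a hypothetical fixed feature map stands in for a trained network body, and a conjugate Gaussian posterior over the last-layer weights gives closed-form predictive means and variances in a single pass, with no sampling.

```python
# Minimal Bayesian-last-layer sketch (illustrative; not the paper's VBLL).
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

def features(x):
    # Hypothetical fixed RBF feature map standing in for the network body.
    centers = np.linspace(-3, 3, 20)
    return np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2)

Phi = features(X[:, 0])          # (N, D) last-layer features
alpha, beta = 1.0, 100.0         # prior precision, observation noise precision

# Closed-form Gaussian posterior over last-layer weights.
# This naive version costs O(D^3) in the last-layer width D; the paper's
# VBLL construction targets nearly quadratic cost in D.
S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
S = np.linalg.inv(S_inv)
m = beta * S @ Phi.T @ y

# Deterministic predictive mean and variance at test inputs -- no sampling.
x_test = np.linspace(-3, 3, 50)
Phi_t = features(x_test)
pred_mean = Phi_t @ m
pred_var = 1.0 / beta + np.einsum("nd,de,ne->n", Phi_t, S, Phi_t)
```

The predictive variance term `Phi_t @ S @ Phi_t.T` grows for inputs far from the training data, which is the mechanism behind the out-of-distribution detection behavior the summary describes.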

Keywords

» Artificial intelligence  » Classification  » Inference  » Regression