


Posterior concentrations of fully-connected Bayesian neural networks with general priors on the weights

by Insung Kong, Yongdai Kim

First submitted to arXiv on: 21 Mar 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Bayesian approaches to training deep neural networks, known as Bayesian neural networks (BNNs), have garnered significant interest and have been used effectively in a variety of applications. While several studies have explored the posterior concentration properties of BNNs, these investigations have focused primarily on models with sparse or heavy-tailed priors. Notably, no theoretical results currently exist for BNNs with Gaussian priors, the most commonly used type. The gap stems from the absence of approximation results for deep neural networks (DNNs) that are non-sparse and have bounded parameters. This paper develops a new approximation theory for non-sparse DNNs with bounded parameters and, based on this theory, shows that BNNs with general priors can achieve near-minimax optimal posterior concentration rates around the true model.
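For readers who want the formal statement, the sketch below gives the standard definition of posterior concentration from the nonparametric Bayes literature. This is generic notation, not a quotation of the paper’s theorem: the metric d, true function f_0, smoothness β, input dimension d, and constants M and γ are placeholders.

```latex
% A standard posterior-contraction statement (generic placeholders; the
% paper's exact conditions and norms may differ). The posterior puts
% vanishing mass outside a shrinking neighborhood of the true function f_0:
\[
  \Pi\!\left( f : d(f, f_0) > M \varepsilon_n \,\middle|\, X_1, \dots, X_n \right)
  \;\xrightarrow{\;P_{f_0}\;}\; 0 .
\]
% "Near-minimax optimal" means the rate \varepsilon_n matches the minimax
% rate for estimating a \beta-smooth function on [0,1]^d up to a log factor:
\[
  \varepsilon_n \;\asymp\; n^{-\beta/(2\beta + d)} (\log n)^{\gamma},
  \qquad \gamma \ge 0 .
\]
```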
Low Difficulty Summary (written by GrooveSquid.com, original content)
Bayesian neural networks are artificial intelligence models that use probability to express how confident they are in their answers. Before training, the model starts with a “prior”: a guess about what its internal settings (weights) should look like. Earlier studies only proved that these models learn well when the prior forces most weights to be zero, or when it allows a few weights to take extreme values. But in practice, people almost always use a simple Gaussian (bell-curve) prior, which does neither. The researchers proved that models with this everyday kind of prior also learn the truth at nearly the best possible speed as they see more data. This matters because it puts the most common way of building these models on solid theoretical ground.

Keywords

  • Artificial intelligence
  • Probability