
Summary of Tree Bandits for Generative Bayes, by Sean O’Hagan et al.


Tree Bandits for Generative Bayes

by Sean O’Hagan, Jungeum Kim, Veronika Rockova

First submitted to arXiv on: 16 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation (stat.CO); Methodology (stat.ME)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel framework for accelerating Approximate Bayesian Computation (ABC) rejection sampling for generative models with obscured likelihoods. The framework is self-aware, learning from past trials and errors: recursive partitioning classifiers fit to the ABC lookup table sequentially refine high-likelihood regions into boxes. Each box is treated as an arm in a binary bandit problem, and its propensity to be chosen for the next ABC evaluation depends on the prior distribution and past rejections. The approach places more splits in high-likelihood areas and shies away from low-probability regions destined for ABC rejection. Two versions are provided: ABC-Tree for posterior sampling and ABC-MAP for maximum a posteriori estimation. The paper demonstrates accurate ABC approximation at much lower simulation cost and justifies the use of tree-based bandit algorithms with nearly optimal regret bounds. Finally, the approach is successfully applied to masked image classification using deep generative models.
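To make the mechanics concrete, here is a minimal, illustrative sketch of the idea described above: the parameter space is partitioned into boxes, each box is treated as a bandit arm scored by its past ABC accept/reject counts, and new simulations are proposed from the most promising box. All names (the toy simulator, the fixed equal-width boxes, the tolerance `eps`) are assumptions for illustration only — the paper grows its partition adaptively with tree classifiers and handles the reweighting needed for exact posterior correctness, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inference problem (illustrative, not the authors' setup):
# prior theta ~ Uniform(0, 1); simulator y ~ Normal(theta, 0.1);
# an ABC trial is "accepted" when |y_sim - y_obs| < eps.
y_obs, eps = 0.7, 0.05

def simulate(theta):
    return rng.normal(theta, 0.1)

# Partition the prior support into boxes. Here the boxes are fixed
# equal-width bins; the paper refines them adaptively with recursive
# partitioning classifiers fit to the ABC lookup table.
n_boxes = 10
edges = np.linspace(0.0, 1.0, n_boxes + 1)
accepts = np.ones(n_boxes)   # Beta(1, 1) pseudo-counts per box (arm)
rejects = np.ones(n_boxes)

samples = []
for _ in range(2000):
    # Thompson sampling over boxes: draw each arm's plausible
    # acceptance probability from its Beta posterior, weight it by
    # the prior mass of the box, and propose from the winner.
    scores = rng.beta(accepts, rejects) * (edges[1:] - edges[:-1])
    b = int(np.argmax(scores))
    theta = rng.uniform(edges[b], edges[b + 1])
    if abs(simulate(theta) - y_obs) < eps:
        accepts[b] += 1
        samples.append(theta)
    else:
        rejects[b] += 1

# Accepted draws concentrate in boxes consistent with y_obs, so far
# fewer simulations are wasted on regions destined for rejection.
print(len(samples), round(float(np.mean(samples)), 2))
```

The design choice mirrored here is the binary-bandit view: each box's accept/reject history feeds a Beta posterior, so exploration shifts automatically toward high-likelihood boxes without ever evaluating the likelihood itself.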
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper develops a new way to make Approximate Bayesian Computation (ABC) faster and more accurate. ABC is used when we can’t directly measure how well a model fits the data, but we have some idea of what a good fit looks like. The method uses a special kind of decision-making algorithm called a bandit algorithm, which chooses between different options based on their past performance. By using this algorithm to decide where the ABC process should focus its attention, the method makes accurate predictions at much lower computational cost than before. This has important implications for fields like computer vision and natural language processing.

Keywords

» Artificial intelligence  » Attention  » Image classification  » Likelihood  » Natural language processing  » Probability