
Provably Scalable Black-Box Variational Inference with Structured Variational Families

by Joohwan Ko, Kyurae Kim, Woo Chang Kim, Jacob R. Gardner

First submitted to arXiv on: 19 Jan 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Computation (stat.CO)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses a key limitation of full-rank variational families in black-box variational inference (BBVI): they scale poorly with the dimensionality of the problem. This is especially critical for hierarchical Bayesian models with local variables, whose dimensionality grows with the dataset size N, so full-rank BBVI incurs an iteration complexity with an O(N^2) dependence on N. The authors introduce structured variational families as a middle ground between mean-field and full-rank approaches. They prove that certain scale matrix structures achieve a better iteration complexity of O(N), and they verify this scaling empirically on large-scale hierarchical models. (A small code sketch illustrating the O(N^2)-versus-O(N) contrast appears below, after the summaries.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps solve a problem in machine learning. Right now, some methods don’t work well when dealing with very big datasets. This is important because many models get bigger as they learn from more data. The authors found a way to make these methods work better by introducing “structured variational families”. They showed that this approach can handle large datasets more efficiently than before, making it useful for machine learning researchers and practitioners.
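To make the O(N^2)-versus-O(N) contrast concrete, here is a minimal illustrative sketch, not the authors' code: it counts the free parameters in the lower-triangular scale matrix of a Gaussian variational approximation for a hierarchical model with a handful of global variables and one small block of local variables per data point. The block-sparse structure used here (a dense global block, independent per-datapoint local blocks, and global-local coupling blocks) is an assumed example of a structured scale matrix; the function and parameter names (scale_parameter_counts, d_global, d_local, n_data) are invented for this illustration, and parameter counts only mirror, rather than prove, the iteration-complexity results in the paper.

```python
# Illustrative sketch (not the paper's code): free-parameter counts for the
# lower-triangular scale matrix of a Gaussian variational family, under a
# hierarchical model with d_global global variables and n_data groups of
# d_local local variables each.

def scale_parameter_counts(d_global: int, d_local: int, n_data: int) -> dict:
    """Count free parameters in the scale factor for three structures."""
    d = d_global + n_data * d_local  # total latent dimension grows linearly in N

    # Mean-field: diagonal scale matrix -> d parameters, but no correlations.
    mean_field = d

    # Full-rank: dense lower-triangular factor -> d(d+1)/2 parameters, O(N^2) in N.
    full_rank = d * (d + 1) // 2

    # Structured (block-sparse) factor: one dense global block, a dense block for
    # each local group, and a global-local coupling block per group -> O(N).
    structured = (
        d_global * (d_global + 1) // 2             # global block
        + n_data * (d_local * (d_local + 1) // 2)  # per-group local blocks
        + n_data * d_local * d_global              # global-local couplings
    )
    return {"mean_field": mean_field, "full_rank": full_rank, "structured": structured}


if __name__ == "__main__":
    # Full-rank grows quadratically with the dataset size; mean-field and the
    # structured family grow only linearly.
    for n in (10, 100, 1000):
        print(n, scale_parameter_counts(d_global=5, d_local=2, n_data=n))
```

With d_global=5, d_local=2, and n_data=1000, for instance, the full-rank factor has roughly 2 million free parameters while the structured factor has about 13 thousand, which is the kind of gap that makes the structured middle ground attractive at scale.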

Keywords

  • Artificial intelligence
  • Inference
  • Machine learning