Summary of Scalable Random Feature Latent Variable Models, by Ying Li et al.


Scalable Random Feature Latent Variable Models

by Ying Li, Zhidi Lin, Yuhao Liu, Michael Minyi Zhang, Pablo M. Olmos, Petar M. Djurić

First submitted to arXiv on: 23 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Random Feature Latent Variable Models (RFLVMs) are a state-of-the-art approach to handling non-Gaussian likelihoods and uncovering patterns in high-dimensional data, but their reliance on Monte Carlo sampling makes them difficult to scale to large datasets. To address this, the authors develop an optimization-based variational Bayesian inference (VBI) algorithm, block coordinate descent variational inference (BCD-VI), which yields scalable RFLVMs (SRFLVMs). Across a range of real-world datasets, the method produces informative latent representations and imputes missing data more accurately than state-of-the-art competitors. (A generic sketch of the block-coordinate idea appears after these summaries.)

Low Difficulty Summary (written by GrooveSquid.com; original content)
Random feature latent variable models help uncover patterns in large, high-dimensional data, but they can be slow on very big datasets. To fix this, the researchers created a new way to apply variational Bayesian inference (VBI) to these models, called block coordinate descent variational inference (BCD-VI). It makes the models faster and better at finding patterns. The new method works well on real-world datasets and does better than other approaches.

Keywords

» Artificial intelligence  » Bayesian inference  » Inference  » Optimization