
Discrete distributions are learnable from metastable samples

by Abhijith Jayakumar, Andrey Y. Lokhov, Sidhant Misra, Marc Vuffray

First submitted to arXiv on: 17 Oct 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Statistical Mechanics (cond-mat.stat-mech); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper studies physically motivated stochastic dynamics that are commonly used to sample high-dimensional distributions. Such systems often get stuck in specific regions of their state space, leading to slow mixing and metastable behavior. The authors show that, under minimal assumptions, the true model describing the stationary distribution of a multi-variable discrete distribution can be recovered from samples drawn from a metastable distribution. The key observation is that the single-variable conditionals of metastable distributions are close to those of the stationary distribution, in terms of Kullback-Leibler divergence or total variation distance. Building on this, the authors propose a conditional likelihood-based estimator that learns the true model even when the samples come from a metastable distribution concentrated in a small region of the state space (a toy sketch of this idea appears after the summaries below). They also give explicit examples of metastable states and apply their results to binary pairwise undirected graphical models (Ising models), recovering both the parameters of the energy function and the structure of the model.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about using special kinds of computer simulations to understand complex systems. These simulations often get stuck in specific places, which can be a problem when trying to learn from them. The authors show that even if these simulations are “stuck,” they can still help us figure out what is going on with the system overall. They do this by looking at small parts of the simulation and seeing how those parts relate to the bigger picture. This gives new ways to use such simulations to learn about complex systems, like understanding how magnets work or how people behave in social networks.

Keywords

* Artificial intelligence
* Likelihood