Summary of Stochastic Gradient Piecewise Deterministic Monte Carlo Samplers, by Paul Fearnhead et al.
Stochastic Gradient Piecewise Deterministic Monte Carlo Samplers
by Paul Fearnhead, Sebastiano Grazzi, Chris Nemeth, Gareth O. Roberts
First submitted to arXiv on: 27 Jun 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG); Computation (stat.CO)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes a new approach to sampling from target distributions using Monte Carlo methods based on piecewise deterministic Markov processes (PDMPs) with sub-sampling. PDMPs are non-reversible continuous-time processes that can mix better than standard reversible MCMC samplers and can incorporate exact sub-sampling schemes without introducing bias. However, the range of models for which PDMPs can be used is limited. To address this limitation, the authors propose approximately simulating PDMPs with sub-sampling, using an Euler approximation to the true PDMP dynamics together with an estimate of the gradient of the log-posterior based on a data sub-sample. This class of algorithms, called stochastic-gradient PDMPs, has continuous trajectories and can leverage recent ideas for sampling from measures with continuous and atomic components. The proposed methods are easy to implement, have low approximation error, and show efficiency similar to, but greater robustness than, stochastic gradient Langevin dynamics. The authors present results on the performance of these algorithms on a range of posterior distributions and show their applicability in various fields. A minimal code sketch of the idea appears after this table. |
Low | GrooveSquid.com (original content) | This paper presents a new way for computers to draw random samples from certain types of target distributions. This is useful for machine learning, where we often need to simulate randomness to train models or make predictions. The authors use special processes called piecewise deterministic Markov processes (PDMPs), which can handle large datasets and are more efficient than traditional methods. The paper introduces a new method called stochastic-gradient PDMPs, which approximates the true PDMP process. This lets computers draw samples from these target distributions using only a small subset of the data at each step, making the sampling much faster. |
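To make the idea of an Euler-discretised, sub-sampled PDMP concrete, here is a minimal sketch of a stochastic-gradient Zig-Zag-style sampler on a toy Gaussian model. This is an illustration written for this summary, not the authors' exact algorithm: the choice of the Zig-Zag process, the toy model, the step size `h`, the sub-sample size `m`, and all function names are assumptions made for the example.

```python
import numpy as np

def grad_neg_log_post(x, data, m, rng):
    """Unbiased sub-sample estimate of the gradient of the negative
    log-posterior for a toy model: data_j ~ N(x, I), flat prior."""
    n = data.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    # Full-data gradient is sum_j (x - data_j); rescale the sub-sample
    # sum by n / m so the estimate stays unbiased.
    return (n / m) * (x - data[idx]).sum(axis=0)

def sg_zigzag(data, n_steps=10_000, h=0.005, m=32, seed=0):
    """Euler-discretised, stochastic-gradient Zig-Zag-style sampler (sketch)."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    x = np.zeros(d)                          # position
    v = rng.choice([-1.0, 1.0], size=d)      # velocity components in {-1, +1}
    samples = np.empty((n_steps, d))
    for t in range(n_steps):
        g = grad_neg_log_post(x, data, m, rng)
        # Component-wise switching rate lambda_i = max(0, v_i * dU/dx_i);
        # over a step of length h, flip v_i with probability 1 - exp(-lambda_i * h).
        rates = np.maximum(0.0, v * g)
        flip = rng.random(d) < -np.expm1(-rates * h)
        v[flip] = -v[flip]
        x = x + h * v                        # deterministic drift between events
        samples[t] = x
    return samples

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(loc=2.0, scale=1.0, size=(500, 2))  # toy dataset
    out = sg_zigzag(data)
    print("posterior mean estimate:", out[2_000:].mean(axis=0))  # ~ data mean
```

In this sketch the sub-sampled gradient is rescaled by n/m so it remains unbiased, and each velocity component flips with probability 1 - exp(-rate * h) per step, mimicking the event-driven dynamics of the exact continuous-time process; the paper's actual construction and its guarantees should be taken from the original abstract and text.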
Keywords
* Artificial intelligence
* Machine learning