


Transfer Operators from Batches of Unpaired Points via Entropic Transport Kernels

by Florian Beier, Hancheng Bi, Clément Sarrazin, Bernhard Schmitzer, Gabriele Steidl

First submitted to arXiv on: 13 Feb 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Dynamical Systems (math.DS)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles the challenge of estimating the joint probability distribution of two random variables from a set of independent observation blocks. The twist is that the pairing of samples within each block is unknown, which makes it harder to infer the true density. To overcome this obstacle, the authors propose a maximum-likelihood inference functional together with a computationally tractable approximation. They also prove a Γ-convergence result showing that, as the number of blocks grows, empirical approximations recover the true density. The hypothesis space for the density is modeled with entropic optimal transport kernels, which enables approximate inference of transfer operators from data (an illustrative sketch of this entropic-transport step is given after the summaries below).
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper tries to figure out how likely two things are to happen together when the data only arrives in scrambled batches. Imagine lots of boxes, each holding several pairs of related items whose labels have been shuffled, so you can no longer tell which item goes with which. The goal is to guess how the items are connected just by looking at the boxes. To do this, the authors come up with a new way to combine the clues from many small batches into one big picture, and they show that the more batches you collect, the closer that picture gets to the truth.
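
The sketch below is only meant to illustrate the general idea behind the entropic-transport ingredient: given two unpaired sample batches, an entropic optimal-transport coupling can be computed with Sinkhorn iterations and read, after row normalization, as a transfer-operator-like Markov matrix. This is not the paper's maximum-likelihood estimator; the NumPy implementation, the squared-Euclidean cost, the regularization strength `eps`, the uniform marginals, and the toy data are all assumptions made for this example.

```python
# Illustrative sketch (not the authors' estimator): estimate an entropic
# optimal-transport coupling between two unpaired sample batches with
# plain Sinkhorn iterations in NumPy.
import numpy as np

def entropic_coupling(x, y, eps=0.1, n_iter=500):
    """Return an n-by-m entropic OT coupling between unpaired batches x and y."""
    n, m = len(x), len(y)
    # Squared-Euclidean ground cost between the two batches (an assumption here).
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-cost / eps)                           # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                           # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                # coupling with prescribed marginals

# Toy usage: a batch of points x and their shuffled (hence unpaired) noisy images y.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
y = rng.permutation(x + 0.05 * rng.normal(size=(50, 2)))
pi = entropic_coupling(x, y)
# Row-normalizing the coupling gives a Markov (transfer-operator-like) matrix
# that maps each x_i to a probability distribution over the y_j.
T = pi / pi.sum(axis=1, keepdims=True)
print(T.shape, T.sum(axis=1)[:3])
```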

Keywords

  • Artificial intelligence
  • Inference
  • Likelihood
  • Probability