Summary of Utilizing Maximum Mean Discrepancy Barycenter for Propagating the Uncertainty of Value Functions in Reinforcement Learning, by Srinjoy Roy et al.
Utilizing Maximum Mean Discrepancy Barycenter for Propagating the Uncertainty of Value Functions in Reinforcement Learning
by Srinjoy Roy, Swagatam Das
First submitted to arXiv on: 31 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | In this paper, the researchers propose a novel Reinforcement Learning (RL) algorithm called Maximum Mean Discrepancy Q-Learning (MMD-QL), which improves upon Wasserstein Q-Learning (WQL) in accounting for the uncertainty of value functions. MMD-QL uses the MMD barycenter to propagate uncertainty during Temporal Difference (TD) updates, since MMD provides a tighter estimate of closeness between probability measures than the Wasserstein distance. The authors prove that MMD-QL is Probably Approximately Correct in Markov Decision Processes (MDPs) under the average loss metric and demonstrate its superiority over WQL and other algorithms on tabular environments. Additionally, they extend MMD-QL to deep networks, creating the MMD Q-Network (MMD-QN), which performs well compared to benchmark deep RL algorithms on challenging Atari games. |
Low | GrooveSquid.com (original content) | This paper introduces a new algorithm for Reinforcement Learning that helps agents learn better by considering uncertainty. The researchers created an algorithm called Maximum Mean Discrepancy Q-Learning (MMD-QL) that works better than others because it takes into account the uncertainty of values. They also showed that MMD-QL can be used with deep learning networks, which is important for handling big problems. |
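The quantity at the heart of the summaries above is the Maximum Mean Discrepancy itself, a kernel-based distance between probability distributions. The sketch below is a minimal, generic implementation of the standard (biased) squared-MMD estimator with an RBF kernel, not the paper's actual algorithm; the sample arrays stand in for particle approximations of value distributions, and the `sigma` bandwidth is an illustrative choice.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel values between 1-D sample sets a (n,) and b (m,).
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd_squared(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two empirical distributions.

    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)], with expectations
    replaced by means over all sample pairs; this equals the squared
    RKHS distance between empirical mean embeddings, so it is >= 0.
    """
    kxx = rbf_kernel(x, x, sigma).mean()
    kyy = rbf_kernel(y, y, sigma).mean()
    kxy = rbf_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
# MMD between samples from the same distribution should be near zero,
# and clearly larger between samples from shifted distributions.
same = mmd_squared(rng.normal(0, 1, 200), rng.normal(0, 1, 200))
diff = mmd_squared(rng.normal(0, 1, 200), rng.normal(3, 1, 200))
```

In the paper's setting, distances like `diff` above would be computed between candidate posterior distributions over Q-values, with the MMD barycenter (the measure minimizing the sum of squared MMDs to a set of measures) propagating uncertainty through TD updates.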
Keywords
* Artificial intelligence * Deep learning * Probability * Reinforcement learning