


Learning from Sparse Offline Datasets via Conservative Density Estimation

by Zhepeng Cen, Zuxin Liu, Zitong Wang, Yihang Yao, Henry Lam, Ding Zhao

First submitted to arXiv on: 16 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This version is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Offline reinforcement learning (RL) offers a promising direction for learning policies from pre-collected datasets without requiring further interactions with the environment. However, existing methods struggle with out-of-distribution (OOD) extrapolation errors, especially in sparse-reward or scarce-data settings. To address this challenge, we propose Conservative Density Estimation (CDE), a novel training algorithm that explicitly imposes constraints on the state-action occupancy stationary distribution. CDE overcomes the limitations of existing approaches by addressing the support mismatch issue in marginal importance sampling (a small illustrative sketch of this idea follows the summaries below). Our method achieves state-of-the-art performance on the D4RL benchmark and consistently outperforms baselines in challenging tasks with sparse rewards or insufficient data, demonstrating the advantages of our approach in addressing the extrapolation error problem in offline RL.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Offline reinforcement learning is a way to learn policies from pre-collected datasets without needing more interactions with the environment. But existing methods struggle when they have to handle situations that don't look like the data they learned from, especially when rewards are sparse or data is scarce. To solve this, we created a new training method called Conservative Density Estimation (CDE). CDE works by putting constraints directly on the state-action occupancy stationary distribution, which deals with the support mismatch problem in marginal importance sampling. Our method did really well on the D4RL benchmark and worked better than other methods when there was little data or rewards were sparse.

Keywords

  • Artificial intelligence
  • Density estimation
  • Reinforcement learning