Summary of ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update, by Liyuan Mao et al.
ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update
by Liyuan Mao, Haoran Xu, Weinan Zhang, Xianyuan Zhan
First submitted to arXiv on: 1 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this study, the researchers revisit Distribution Correction Estimation (DICE) methods in offline reinforcement learning (RL) and imitation learning (IL). They investigate why DICE-based methods, which impose a state-action-level behavior constraint, typically underperform compared to state-of-the-art (SOTA) methods that use only an action-level behavior constraint. The team finds that two gradient terms arise when learning the value function: a forward gradient and a backward gradient. Using only the forward gradient resembles many offline RL methods, but adding the backward gradient can cancel out its effect when the two directions conflict. To resolve this issue, the researchers propose a simple modification called the orthogonal-gradient update, which projects the backward gradient onto the normal plane of the forward gradient (see the code sketch after this table). This new learning rule for DICE-based methods achieves SOTA performance and robustness in offline RL and IL tasks. |
Low | GrooveSquid.com (original content) | DICE is a family of offline reinforcement learning methods that help machines learn from past experiences without actually playing the game. The problem is that these methods don’t always work as well as expected. Researchers looked into why and found that two kinds of “gradients” (update directions) appear when learning what’s good or bad in a situation, and that adding the second gradient can undo the first one’s progress when the two point in conflicting directions. To solve this, they came up with a new way to combine the gradients, keeping only the part of the second one that doesn’t fight the first, and this works better. |
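To make the projection step concrete, here is a minimal sketch of the orthogonal-gradient update as described in the medium-difficulty summary: remove from the backward gradient its component along the forward gradient, then combine the two. The function name, the flattened gradient vectors, and the mixing coefficient `eta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def orthogonal_gradient_update(forward_grad, backward_grad, eta=1.0):
    """Combine the forward and backward gradients after projecting the
    backward gradient onto the normal plane of the forward gradient.

    forward_grad, backward_grad: flattened 1-D gradient vectors (illustrative).
    eta: assumed mixing coefficient; not specified in the summary.
    """
    # Component of the backward gradient along the forward gradient.
    denom = np.dot(forward_grad, forward_grad) + 1e-12  # guard against zero norm
    parallel = (np.dot(backward_grad, forward_grad) / denom) * forward_grad
    # Drop the parallel component so the backward gradient can no longer
    # cancel the forward gradient when their directions conflict.
    backward_orthogonal = backward_grad - parallel
    # Combined update direction for the value-function parameters.
    return forward_grad + eta * backward_orthogonal

# Toy usage: the backward gradient partially opposes the forward gradient.
g_f = np.array([1.0, 0.0])
g_b = np.array([-1.0, 1.0])
print(orthogonal_gradient_update(g_f, g_b))  # [1. 1.]: the conflicting part is removed
```

In an actual method the two gradients would come from the value-function loss rather than toy vectors; the projection itself is the only part specific to the orthogonal-gradient idea sketched here.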
Keywords
- Artificial intelligence
- Reinforcement learning