
Compositional Conservatism: A Transductive Approach in Offline Reinforcement Learning

by Yeda Song, Dongwook Lee, Gunhee Kim

First submitted to arXiv on: 6 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Robotics (cs.RO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com original content)
The proposed COmpositional COnservatism with Anchor-seeking (COCOA) framework for offline reinforcement learning (RL) tackles the problem of distributional shifts by pursuing conservatism in a compositional manner. Building upon transductive reparameterization, COCOA decomposes input variables into anchors and differences, then seeks both in-distribution anchors and differences using learned reverse dynamics models. This approach encourages conservatism in the compositional input space for policies or value functions, independent of behavioral conservatism. The framework is applied to four state-of-the-art offline RL algorithms and evaluated on the D4RL benchmark, showing improved performance.

Low Difficulty Summary (GrooveSquid.com original content)
Offline reinforcement learning tries to learn from past experiences without collecting new ones. This can be tricky because the situations the agent meets when it is actually used might not match what it learned from. One way to handle this is to be more careful, or conservative, about the inputs the agent relies on. The new approach, called COCOA, breaks each input into two parts: an anchor and the difference from that anchor. It then steers both parts toward familiar territory, so the agent works with inputs it already understands. By doing things this way, COCOA helps four different offline RL methods work better on a standard benchmark suite.
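
To make the anchor-and-difference decomposition described in the medium-difficulty summary more concrete, here is a minimal PyTorch sketch. The module names (AnchorSeeker, CompositionalPolicy), the network sizes, and the plain MLP standing in for the paper's learned reverse dynamics model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of feeding a policy the (anchor, difference) decomposition of a state,
# in the spirit of COCOA's compositional input space. Module names and shapes are
# hypothetical; the paper trains the anchor seeker with a reverse dynamics model,
# which is omitted here.
import torch
import torch.nn as nn


class AnchorSeeker(nn.Module):
    """Proposes an in-distribution anchor for a given state (stand-in MLP)."""

    def __init__(self, state_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class CompositionalPolicy(nn.Module):
    """Policy that consumes [anchor, difference] instead of the raw state."""

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.anchor_seeker = AnchorSeeker(state_dim, hidden_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        anchor = self.anchor_seeker(state)      # seek an in-distribution anchor
        difference = state - anchor             # reparameterize: state = anchor + difference
        return self.net(torch.cat([anchor, difference], dim=-1))


# Usage sketch: wrap the state input of an existing offline RL actor with the decomposition.
policy = CompositionalPolicy(state_dim=17, action_dim=6)
actions = policy(torch.randn(32, 17))
print(actions.shape)  # torch.Size([32, 6])
```

Because the decomposition only changes what the policy or value network sees as input, it can be layered on top of existing offline RL algorithms, which is how the summaries above describe COCOA being combined with four baseline methods.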

Keywords

  • Artificial intelligence
  • Reinforcement learning