
Summary of Pessimistic Value Iteration for Multi-Task Data Sharing in Offline Reinforcement Learning, by Chenjia Bai et al.


Pessimistic Value Iteration for Multi-Task Data Sharing in Offline Reinforcement Learning

by Chenjia Bai, Lingxiao Wang, Jianye Hao, Zhuoran Yang, Bin Zhao, Zhen Wang, Xuelong Li

First submitted to arXiv on: 30 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes an uncertainty-based approach to Multi-Task Data Sharing (MTDS) in offline Reinforcement Learning (RL). Rather than selecting which transitions to share, the method shares entire datasets from other tasks and counters the resulting distribution shift with ensemble-based uncertainty quantification and pessimistic value iteration, yielding a unified framework for both single-task and multi-task offline RL. Theoretical analysis shows that the optimality gap depends only on the expected data coverage of the shared dataset, resolving the distribution-shift issue in data sharing. Experiments on three challenging domains show that the algorithm outperforms previous state-of-the-art methods. (A rough code sketch of the core update follows these summaries.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
Offline Reinforcement Learning (RL) learns a task-specific policy from a fixed dataset. But what if that dataset is too small? One idea is to borrow datasets collected for other tasks, called Multi-Task Data Sharing (MTDS). The catch is that data from other tasks follows a different distribution, which can mislead learning. To fix this, the paper suggests sharing the entire dataset without picking out certain parts, while staying cautious wherever the combined data offers little evidence. This helps with both single-task and multi-task offline RL. The researchers also prove that their method works well and show that it beats other methods on challenging tasks.
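
To make the medium difficulty summary concrete, here is a minimal tabular sketch of the core idea: fit an ensemble of Q-functions on the pooled (shared) offline data, then penalize each value estimate by the ensemble's disagreement before acting greedily. This is an illustrative toy under assumed names and data (n_ensemble, beta, the random dataset are all inventions of this sketch), not the paper's implementation, which uses deep ensemble Q-networks on continuous-control benchmarks.

    import numpy as np

    # Toy pessimistic value iteration with ensemble-based uncertainty.
    # All sizes and the synthetic dataset are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_states, n_actions, n_ensemble = 10, 4, 5
    gamma, beta, lr = 0.99, 1.0, 0.1

    # Shared offline dataset: (s, a, r, s') transitions pooled from
    # several tasks and relabeled with the target task's reward, as
    # MTDS prescribes (here just random placeholders).
    dataset = [(rng.integers(n_states), rng.integers(n_actions),
                rng.random(), rng.integers(n_states))
               for _ in range(2000)]

    # Ensemble of Q-tables; each member trains on a bootstrap resample,
    # so disagreement grows where the shared data gives poor coverage.
    Q = rng.normal(scale=0.1, size=(n_ensemble, n_states, n_actions))

    for _ in range(30):  # training epochs
        for k in range(n_ensemble):
            for i in rng.integers(len(dataset), size=len(dataset)):
                s, a, r, s2 = dataset[i]
                # Pessimistic next-state value: ensemble mean minus an
                # uncertainty penalty (ensemble standard deviation).
                q_next = Q[:, s2].mean(axis=0) - beta * Q[:, s2].std(axis=0)
                target = r + gamma * q_next.max()
                Q[k, s, a] += lr * (target - Q[k, s, a])

    # Act greedily w.r.t. the penalized (pessimistic) value estimate.
    pessimistic_Q = Q.mean(axis=0) - beta * Q.std(axis=0)
    print("greedy pessimistic policy:", pessimistic_Q.argmax(axis=1))

The penalty plays the role of the uncertainty quantifier in pessimistic value iteration: state-action pairs poorly covered by the shared dataset get large ensemble disagreement and therefore lower estimated value, so the learned policy avoids them. This is the intuition behind the paper's claim that the optimality gap depends only on the expected data coverage of the shared dataset.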

Keywords

» Artificial intelligence  » Multi task  » Reinforcement learning