

Robust Offline Imitation Learning from Diverse Auxiliary Data

by Udita Ghosh, Dripta S. Raychaudhuri, Jiachen Li, Konstantinos Karydis, Amit K. Roy-Chowdhury

First submitted to arXiv on: 4 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
Offline imitation learning enables policy learning from expert demonstrations without environment interaction. Recent approaches incorporate large numbers of auxiliary demonstrations to alleviate distribution shift issues. However, these rely on quality and composition assumptions that rarely hold true. To address this limitation, we propose Robust Offline Imitation from Diverse Auxiliary Data (ROIDA). ROIDA first identifies high-quality transitions using a learned reward function and combines them with expert demonstrations for weighted behavioral cloning. For lower-quality samples, ROIDA applies temporal difference learning to improve long-term returns. This two-pronged approach enables our framework to effectively leverage both high and low-quality data without assumptions. Extensive experiments validate that ROIDA achieves robust and consistent performance across multiple auxiliary datasets with diverse ratios of expert and non-expert demonstrations.
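The weighting idea in the summary above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function names, the sigmoid mapping from learned rewards to weights, and the temperature parameter are all assumptions made for the example. The point is simply that transitions the learned reward model scores highly contribute more to the behavioral-cloning loss, while low-scoring auxiliary transitions are down-weighted (in ROIDA, those would instead be handled by temporal difference learning).

```python
import numpy as np

def transition_weights(learned_rewards, temperature=1.0):
    """Map learned reward estimates to (0, 1) weights via a sigmoid.

    Hypothetical weighting scheme: higher estimated reward means the
    transition looks more expert-like, so it gets a larger BC weight.
    """
    return 1.0 / (1.0 + np.exp(-learned_rewards / temperature))

def weighted_bc_loss(policy_log_probs, weights):
    """Weighted behavioral cloning loss.

    policy_log_probs: log pi(a|s) for each transition in the batch.
    weights: per-transition weights from the learned reward model.
    Low-quality samples contribute little to this loss.
    """
    return -np.mean(weights * policy_log_probs)
```

As a usage example, a transition with estimated reward 3.0 would receive a weight near 1 and dominate the loss, while one with estimated reward -2.0 would be nearly ignored by the cloning objective.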
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine learning a new skill just by watching someone else do it, without having to practice yourself. That’s basically what offline imitation learning is – learning from examples. But sometimes, this approach can fail because the training data isn’t good enough. To fix this problem, we created a new way of combining expert demonstrations with other, lower-quality data. Our method, called ROIDA, first picks out the best parts of the extra data and combines them with the expert examples. Then, it uses that combination to learn a better policy. We tested our approach on many different datasets and found that it works really well, even when the extra data isn’t perfect.

Keywords

* Artificial intelligence