
Summary of Entropy Regularized Task Representation Learning for Offline Meta-Reinforcement Learning, by Mohammadreza Nakhaei et al.


Entropy Regularized Task Representation Learning for Offline Meta-Reinforcement Learning

by Mohammadreza Nakhaei, Aidan Scannell, Joni Pajarinen

First submitted to arXiv on: 19 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Offline meta-reinforcement learning enables agents to rapidly adapt to new tasks by training on offline data collected from a set of different tasks. Context-based approaches use a history of state-action-reward transitions, referred to as the context, to infer a representation of the current task and condition the agent on that representation. However, the context collected during offline training can differ from the context the agent encounters at test time, and this distribution mismatch causes the task representations to overfit to the offline training data. To address this issue, the authors approximately minimize the mutual information between the task representations and the behavior policy by maximizing the entropy of the behavior policy conditioned on the task representations (see the sketch after these summaries). The approach is validated in MuJoCo environments, where it improves performance on both in-distribution and out-of-distribution tasks compared to baselines.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Offline meta-reinforcement learning helps robots quickly learn new tasks by training them on data from many different tasks. Current methods use a “memory” of what happened before to understand the current task. However, this memory does not always match what happens at test time, so these methods overfit to the training data. The authors fix this problem by encouraging the behavior policy (how the agent that collected the data behaves) to look more random when given information about the task, so the task information does not leak details of the data-collection policy. This approach works better than previous methods in simulations.

Keywords

  • Artificial intelligence
  • Overfitting
  • Reinforcement learning