
Summary of Online Intrinsic Rewards for Decision Making Agents from Large Language Model Feedback, by Qinqing Zheng et al.


Online Intrinsic Rewards for Decision Making Agents from Large Language Model Feedback

by Qinqing Zheng, Mikael Henaff, Amy Zhang, Aditya Grover, Brandon Amos

First submitted to arXiv on: 30 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Robotics (cs.RO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes a novel architecture, called ONI, that addresses the limitations of existing approaches to synthesizing dense rewards from natural language descriptions in reinforcement learning. ONI simultaneously learns an RL policy and an intrinsic reward function from large language model (LLM) feedback, annotating the agent's experience through an asynchronous LLM server; a hedged sketch of this loop appears after these summaries. The work also explores several algorithmic choices for reward modeling, including hashing, classification, and ranking models, to shed light on questions of intrinsic reward design for sparse-reward problems.

Low Difficulty Summary (original content by GrooveSquid.com)
ONI is a new way to help artificial intelligence (AI) learn from language descriptions without needing a lot of data. This can be useful for things like playing games or solving puzzles that don't have clear rewards. The paper shows how ONI can do this better than other approaches by learning both what actions to take and what rewards matter at the same time.

Keywords

  • Artificial intelligence
  • Classification
  • Large language model
  • Reinforcement learning