Summary of Extracting Heuristics from Large Language Models for Reward Shaping in Reinforcement Learning, by Siddhant Bhambri et al.
Extracting Heuristics from Large Language Models for Reward Shaping in Reinforcement Learning
by Siddhant Bhambri, Amrita Bhattacharjee, Durgesh Kalwar, Lin Guan, Huan Liu, Subbarao Kambhampati
First submitted to arXiv on: 24 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper investigates ways to improve the sample efficiency of Reinforcement Learning (RL) agents in sparse-reward domains with stochastic transitions. One approach is reward shaping, which introduces intrinsic rewards to help RL agents converge faster; however, designing a useful reward shaping function for all desirable states in the Markov Decision Process (MDP) is challenging. The authors propose leveraging Large Language Models (LLMs) to generate heuristics for constructing a reward shaping function that boosts RL agents’ sample efficiency. They use off-the-shelf LLMs to generate plans for abstractions of the MDP and analyze the quality of these heuristics in multiple domains, including BabyAI, Household, Mario, and Minecraft. The results show significant improvements in PPO, A2C, and Q-learning when guided by LLM-generated heuristics (see the sketch after this table). |
| Low | GrooveSquid.com (original content) | This paper tries to make Reinforcement Learning (RL) better by using special computer programs called Large Language Models (LLMs). RL is a way for computers to learn from experience. The problem is that it can take a long time for the computer to figure things out, especially when there aren’t many rewards or the situation changes often. The authors want to know if LLMs can help make RL faster and more efficient. They used LLMs to create plans for different scenarios and tested how well they worked in various games and environments. The results show that using LLMs can really help make RL better! |
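To make the reward-shaping idea concrete, here is a minimal sketch of how an LLM-generated plan over an MDP abstraction could be turned into a shaping signal. This is an illustration, not the paper’s actual implementation: the subgoal names, the `plan_progress` helper, and the potential function are all hypothetical, and the sketch uses classic potential-based shaping, r' = r + γ·Φ(s') - Φ(s), which is known to preserve optimal policies.

```python
# Hypothetical sketch: shaping an environment reward with a potential
# derived from an LLM-generated plan. The plan, subgoal names, and
# helpers below are illustrative assumptions, not taken from the paper.

GAMMA = 0.99

# Suppose an off-the-shelf LLM, prompted with an abstraction of the MDP,
# returned an ordered list of subgoal predicates for the task.
llm_plan = ["picked_up_key", "door_open", "reached_goal"]

def plan_progress(state):
    """Index of the furthest plan step satisfied in `state` (0 if none).

    `state` is assumed to be a dict mapping predicate names to booleans.
    """
    progress = 0
    for i, subgoal in enumerate(llm_plan, start=1):
        if state.get(subgoal, False):
            progress = i
    return progress

def potential(state):
    # The potential grows with progress through the LLM-generated plan,
    # normalized to [0, 1].
    return plan_progress(state) / len(llm_plan)

def shaped_reward(reward, state, next_state):
    # Potential-based shaping (Ng et al., 1999):
    # r' = r + gamma * phi(s') - phi(s).
    return reward + GAMMA * potential(next_state) - potential(state)
```

In an RL training loop, `shaped_reward` would simply replace the environment reward fed to the PPO, A2C, or Q-learning update. Because the potential-based term telescopes along trajectories, it can accelerate learning in sparse-reward settings without changing which policies are optimal.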
Keywords
» Artificial intelligence » Reinforcement learning