Summary of Reinforcement Learning From Imperfect Corrective Actions and Proxy Rewards, by Zhaohui Jiang et al.
Reinforcement Learning From Imperfect Corrective Actions and Proxy Rewards
by Zhaohui Jiang, Xuening Feng, Paul Weng, Yifei Zhu, Yan Song, Tianze Zhou, Yujing Hu, Tangjie Lv, Changjie Fan
First submitted to arXiv on: 8 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In a novel approach to reinforcement learning (RL), the researchers propose a framework that incorporates human feedback in the form of corrective actions to better align learned policies with human preferences. Their Iterative Learning from Corrective Actions and Proxy Rewards (ICoPro) algorithm cycles through three phases: soliciting sparse corrective actions from a human, incorporating them into the Q-function with a margin loss, and training the agent with standard RL losses on the proxy reward, regularized by that margin loss. Additionally, pseudo-labels are integrated to reduce human labor and stabilize training. The authors experimentally validate their approach on Atari games and autonomous driving tasks, showing improved sample efficiency and better alignment with human preferences; thanks to the proxy rewards, the method can also overcome non-optimality in the corrective actions. A hedged code sketch of the margin-regularized update appears after this table. |
Low | GrooveSquid.com (original content) | Scientists have developed a new way for machines to learn from humans in reinforcement learning (RL). They’ve created an algorithm, called Iterative Learning from Corrective Actions and Proxy Rewards (ICoPro), that asks humans to correct mistakes made by the machine and then uses those corrections to improve its decision-making. Humans provide feedback on the machine’s actions, which helps it learn which choices are good or bad, even when that feedback is imperfect. The researchers tested the method on video games and autonomous driving tasks and found that it learned faster and matched human preferences better than previous methods. |
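To make the margin-loss idea in the medium summary concrete, here is a minimal, hypothetical sketch of a margin-regularized Q-learning update in the spirit of ICoPro. It is not the authors' implementation: the function names (`margin_loss`, `icopro_style_update`), batch formats, and hyperparameter values are illustrative assumptions, and the update simply combines a generic DQN-style TD loss on the proxy reward with a large-margin penalty on human-labeled corrective actions.

```python
# Hypothetical sketch (not the paper's code): margin-regularized Q-learning
# mixing a proxy-reward TD loss with a large-margin loss on corrective actions.
import torch
import torch.nn.functional as F

def margin_loss(q_values, corrective_actions, margin=0.8):
    """Hinge-style large-margin loss: push the Q-value of the human's
    corrective action above every other action's Q-value by `margin`."""
    # q_values: (batch, num_actions); corrective_actions: (batch,) long tensor
    margins = torch.full_like(q_values, margin)
    # No margin is added to the corrective action itself.
    margins.scatter_(1, corrective_actions.unsqueeze(1), 0.0)
    augmented_max = (q_values + margins).max(dim=1).values
    q_human = q_values.gather(1, corrective_actions.unsqueeze(1)).squeeze(1)
    return (augmented_max - q_human).mean()

def icopro_style_update(q_net, target_net, optimizer, rl_batch, feedback_batch,
                        gamma=0.99, margin_weight=1.0):
    """One gradient step: standard TD loss on the proxy reward, regularized by
    the margin loss on the sparse human-labeled transitions."""
    s, a, r, s_next, done = rl_batch          # done is a float mask (0.0 / 1.0)
    with torch.no_grad():
        # One-step TD target built from the proxy reward r.
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td_loss = F.smooth_l1_loss(q_sa, target)

    # Margin loss only on states where a corrective action was collected.
    fb_states, fb_actions = feedback_batch
    m_loss = margin_loss(q_net(fb_states), fb_actions)

    loss = td_loss + margin_weight * m_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the margin term makes the agent prefer the human's labeled action wherever labels exist, while the TD term keeps learning from the proxy reward everywhere else, which is how a margin-regularized objective can tolerate imperfect corrections.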
Keywords
» Artificial intelligence » Alignment » Reinforcement learning