
Summary of Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning, by Zhiheng Xi et al.


Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning

by Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, Wei He, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang

First submitted to arXiv on: 8 Feb 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel method called R^3, which uses outcome supervision to achieve benefits similar to process supervision for large language models. The authors identify the limitations of current RL approaches, including sparse rewards and manual annotation requirements. To overcome these limitations, R^3 learns from correct demonstrations by progressively sliding the start state of reasoning, creating a step-wise curriculum that allows outcome supervision to pinpoint errors precisely (a rough code sketch of this sliding-start idea appears after the summaries below). The method surpasses RL baselines on eight reasoning tasks by 4.1 points on average, with notable improvements in program-based reasoning on GSM8K. The authors demonstrate the effectiveness of R^3 using Llama2-7B and Codellama-7B models.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces a new way to teach large language models, called R^3. It helps these models learn by giving feedback on what they do right, rather than just telling them when they're wrong. The authors show that this approach is better than others because it allows the model to explore and learn from mistakes more easily. They tested their method on eight different reasoning tasks and found that it performed 4.1 points better than other methods on average. This new way of teaching language models could lead to better results in areas like programming and text understanding.

Keywords

* Artificial intelligence