Summary of GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements, by Alex Havrilla et al.
GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements
by Alex Havrilla, Sharath Raparthy, Christoforus Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Roberta Raileanu
First submitted to arXiv on: 13 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on its arXiv page |
Medium | GrooveSquid.com (original content) | This paper presents an approach to refining reasoning in language models using synthetic data and outcome-based reward models (ORMs). The authors propose Stepwise ORMs (SORMs), which are trained on synthetic data to predict whether each intermediate step can still lead to a correct final answer, where the training labels come from sampling the current policy many times from that step. SORMs detect incorrect reasoning steps more accurately than standard ORMs, which in turn improves downstream accuracy when refining. The authors then train two refinement models: a global refinement model that takes only the question and a draft solution as input, and a local refinement model that additionally takes critique feedback indicating the location of the first error. Combining these strategies significantly improves model performance on the GSM8K dataset (a code sketch of the step-labeling idea follows the table). |
Low | GrooveSquid.com (original content) | This paper helps language models get better at solving math and science problems by figuring out when and where to refine their answers. Even the best current models struggle to know when to refine without outside help, so this approach uses synthetic data to train a special type of reward model called a SORM. These models are good at spotting mistakes in reasoning and can improve the accuracy of a language model like LLaMA-2 13B from 53% to 65% on the GSM8K benchmark. |
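
The step-labeling idea behind SORMs can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the authors’ code: `label_steps`, `first_error`, `sample_rollouts`, and `is_correct` are assumed stand-ins for the policy sampler and the final-answer checker, and a step is labeled correct here if at least one of k rollouts from that step still reaches the right answer.

```python
# Minimal sketch of rollout-based step labeling for SORM training data.
# `sample_rollouts` and `is_correct` are hypothetical helpers, not the paper's code.
from typing import Callable, List


def label_steps(
    question: str,
    steps: List[str],
    sample_rollouts: Callable[[str, List[str], int], List[str]],
    is_correct: Callable[[str], bool],
    k: int = 8,
) -> List[int]:
    """Label each step 1 if at least one of k policy rollouts continuing from
    that step reaches a correct final answer, else 0."""
    labels = []
    for t in range(1, len(steps) + 1):
        completions = sample_rollouts(question, steps[:t], k)
        labels.append(int(any(is_correct(c) for c in completions)))
    return labels


def first_error(labels: List[int]) -> int:
    """Index of the first step labeled incorrect, or -1 if none is found."""
    return labels.index(0) if 0 in labels else -1
```

Under this framing, a trained SORM approximates these rollout labels at inference time: the global refinement model would be prompted with only the question and draft solution, while the local refinement model would additionally receive the first-error location as its critique.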
Keywords
* Artificial intelligence * LLaMA * Synthetic data