Summary of Stream of Search (SoS): Learning to Search in Language, by Kanishk Gandhi et al.
Stream of Search (SoS): Learning to Search in Language
by Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, Noah D. Goodman
First submitted to arXiv on: 1 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper represents the process of search in language as a flattened string: a stream of search (SoS). This unified language captures a variety of symbolic search strategies. A language model is pretrained on a dataset of search streams generated by heuristic solvers, improving search accuracy by 25% over models trained only on optimal solution paths. Further finetuning with policy improvement methods such as APA and STaR boosts problem-solving ability even more, including the discovery of solutions to previously unsolved problems. |
| Low | GrooveSquid.com (original content) | Language models rarely see mistakes while they learn. This makes it hard for them to recover when a step turns out to be wrong a few moves later. Researchers found a way to teach language models to search for answers by writing the whole search process out as one long string. They tested this approach on a math game called Countdown, where you have to combine numbers using simple math operations to reach a target number. The trained model was 25% better at finding solutions than one trained the usual way. Then they made it even better with two techniques, APA and STaR. With these, the model could solve problems that none of the original solvers could. |
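To make the Countdown task concrete, here is a minimal brute-force solver sketch (not the paper's method, and not the heuristic solvers it uses): it recursively combines pairs of numbers with +, −, ×, and exact ÷ until one intermediate value hits the target. The function name and structure are illustrative assumptions.

```python
from itertools import combinations

def countdown(numbers, target):
    """Depth-first search for a Countdown solution: repeatedly combine two
    numbers with an arithmetic operation until one value equals the target.
    Returns an expression string, or None if no solution is found."""
    def solve(items):
        # items is a list of (value, expression) pairs still available.
        for (a, ea), (b, eb) in combinations(items, 2):
            rest = list(items)
            rest.remove((a, ea))
            rest.remove((b, eb))
            candidates = [
                (a + b, f"({ea}+{eb})"),
                (a * b, f"({ea}*{eb})"),
                (a - b, f"({ea}-{eb})"),
                (b - a, f"({eb}-{ea})"),
            ]
            # Only allow division when it is exact, as in the game.
            if b != 0 and a % b == 0:
                candidates.append((a // b, f"({ea}/{eb})"))
            if a != 0 and b % a == 0:
                candidates.append((b // a, f"({eb}/{ea})"))
            for value, expr in candidates:
                if value == target:
                    return expr
                result = solve(rest + [(value, expr)])
                if result:
                    return result
        return None
    return solve([(n, str(n)) for n in numbers])
```

For example, `countdown([1, 2, 3], 9)` finds an expression such as `(3*(1+2))`. The paper's point is that the *trace* of such a search (states tried, dead ends, backtracking) can itself be serialized as text and used to train a language model.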