Summary of Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation, by Rohin Manvi et al.
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation
by Rohin Manvi, Anikait Singh, Stefano Ermon
First submitted to arXiv on: 3 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper introduces a generative self-evaluation scheme to improve the performance of large language models (LLMs) while reducing computational cost. The approach is formulated as a generative reward model: mid-generation, the LLM predicts the probability that generating another response would improve on the current best one. Based on this prediction, the model decides whether to generate more samples, prune unpromising ones early, or stop and pick the best response. The self-evaluation is computationally cheap because it requires generating only a single predefined token. The authors train their model on real user prompts from LMSYS and report significant gains: a 34% win rate against GPT-4 on AlpacaEval and 91% math performance on GSM8K. By sampling adaptively, they show that 74% of the improvement can be achieved with only 1.2 samples on average. |
| Low | GrooveSquid.com (original content) | This paper helps make computers better at understanding human language. It's like a game where the computer comes up with an answer and then decides whether it needs to try again or can pick the best one it already has. This makes the computer more efficient and reduces the amount of work it has to do. The authors tested their idea on real examples of what people ask computers, and made the computer better at answering questions about math and other topics. |
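The adaptive sampling loop described in the medium summary can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate_sample` and `predict_improvement_prob` are hypothetical stand-ins (in the paper, the improvement probability comes from the likelihood the LLM assigns to a single predefined token, whereas here it is simulated with a simple stub).

```python
import random

def generate_sample(prompt, rng):
    # Hypothetical stub: a "response" with a random quality score attached.
    quality = rng.random()
    return {"text": f"response(q={quality:.2f})", "quality": quality}

def predict_improvement_prob(prompt, best_quality, rng):
    # Stub for the generative self-evaluation. In the paper this is the
    # probability of a single predefined token; here we proxy it with the
    # headroom left above the best sample seen so far.
    return 1.0 - best_quality

def adaptive_best_of_n(prompt, max_samples=8, stop_threshold=0.2, seed=0):
    """Keep sampling only while the model predicts it can do better."""
    rng = random.Random(seed)
    best = generate_sample(prompt, rng)
    used = 1
    while used < max_samples:
        p_improve = predict_improvement_prob(prompt, best["quality"], rng)
        if p_improve < stop_threshold:
            break  # unlikely to do better: stop early and save compute
        candidate = generate_sample(prompt, rng)
        used += 1
        if candidate["quality"] > best["quality"]:
            best = candidate  # keep the best sample; others are pruned
    return best, used

best, used = adaptive_best_of_n("What is 2+2?")
print(f"used {used} samples, best quality {best['quality']:.2f}")
```

The key design point is that the stopping decision is made per-prompt: easy prompts terminate after one or two samples, while harder ones consume more of the budget, which is how the paper's reported average of 1.2 samples becomes possible.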
Keywords
» Artificial intelligence » GPT » Probability » Pruning » Token