Summary of Learning How Hard to Think: Input-Adaptive Allocation of LM Computation, by Mehul Damani et al.
Learning How Hard to Think: Input-Adaptive Allocation of LM Computation
by Mehul Damani, Idan Shenfeld, Andi Peng, Andreea Bobu, Jacob Andreas
First submitted to arXiv on: 7 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary This paper proposes an adaptive approach to decoding language model (LM) outputs in applications such as code generation, numerical reasoning, and dialog. Existing methods apply a single decoding procedure to every input, which is wasteful because inputs differ in how much computation they need. The authors train a model to predict the distribution of rewards an LM will obtain for a given input and computation budget, and use that prediction to allocate additional resources to inputs expected to benefit most. Two adaptive decoding procedures are developed: an adaptive best-of-k procedure that chooses how many samples to draw for each input, and a routing procedure that selects between an accurate but expensive decoding method and a cheaper but less capable one. Experimental results show the approach can reduce computation by up to 50% with no loss in response quality, or improve quality by up to 10% at a fixed computational budget. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper is about making computers better at understanding what we want them to do, like generating code or answering math questions. Right now, computers use the same method for every task, which can be slow and not always accurate. The researchers came up with a new way to decide how much time and effort to spend on each task based on how hard it is. They tested their idea on different types of problems and found that it can save up to half of the computer's resources without sacrificing accuracy, or improve accuracy by up to 10% while using the same amount of resources. |
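To make the adaptive best-of-k idea concrete, here is a minimal sketch (not the paper's implementation) of how a per-input sample budget might be allocated. It assumes a hypothetical `predicted_success_prob` function standing in for the paper's learned reward predictor, and greedily gives each extra sample to the input with the largest expected marginal gain in success probability:

```python
def predicted_success_prob(difficulty):
    # Hypothetical stand-in for the learned reward-distribution predictor:
    # easier inputs are predicted to succeed with higher probability per sample.
    return max(0.05, 1.0 - difficulty)

def allocate_budget(difficulties, total_samples):
    """Greedily assign a total sample budget across inputs, maximizing the
    expected number of inputs with at least one successful sample.

    With k independent samples each succeeding with probability p, the
    success probability is 1 - (1 - p)**k; each step adds a sample where
    the marginal gain in that quantity is largest."""
    k = [1] * len(difficulties)  # every input gets at least one sample

    def marginal_gain(i):
        p = predicted_success_prob(difficulties[i])
        return (1 - (1 - p) ** (k[i] + 1)) - (1 - (1 - p) ** k[i])

    for _ in range(total_samples - len(difficulties)):
        i = max(range(len(difficulties)), key=marginal_gain)
        k[i] += 1
    return k

# An easy input (difficulty 0.1) and a hard one (difficulty 0.9):
# most of the 10-sample budget flows to the hard input.
print(allocate_budget([0.1, 0.9], 10))
```

The routing procedure described above could use the same predictor with a simple threshold: send an input to the expensive decoder only when its predicted single-sample success probability under the cheap decoder is low.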
Keywords
* Artificial intelligence * Language model