Summary of Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems, by Lingjiao Chen and Jared Quincy Davis and Boris Hanin and Peter Bailis and Ion Stoica and Matei Zaharia and James Zou
Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems
by Lingjiao Chen, Jared Quincy Davis, Boris Hanin, Peter Bailis, Ion Stoica, Matei Zaharia, James Zou
First submitted to arXiv on: 4 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Systems and Control (eess.SY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | A compound system that aggregates language model responses by majority voting can exhibit non-monotonic behavior: performance first increases and then decreases as the number of language model calls grows. The study attributes this to the diversity of query difficulties within a task: more language model calls improve performance on easier queries but worsen it on harder ones. Building on this insight, the authors derive an analytical scaling model that accurately predicts the performance of vote-based and filter-vote systems, which makes it possible to compute the number of language model calls that maximizes system performance.
Low | GrooveSquid.com (original content) | Compound language models answer questions by making multiple language model calls and taking a majority vote. But how does the number of calls affect the system's performance? Researchers found that making more language model calls improves performance on easy questions but hurts it on hard ones. This led them to develop an analytical scaling model that can predict the optimal number of language model calls for the best results.
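The non-monotonic behavior described above can be illustrated with a small calculation. The sketch below is not taken from the paper: it assumes each call is independently correct with a fixed per-query probability, uses an odd number of calls to avoid ties, and the example accuracies (0.8 for easy queries, 0.3 for hard ones) and the 50/50 query mix are illustrative choices.

```python
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """Probability that a majority vote over n independent calls,
    each correct with probability p, returns the right answer.
    Assumes n is odd so no ties can occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def task_accuracy(n: int, easy_p: float = 0.8, hard_p: float = 0.3,
                  easy_share: float = 0.5) -> float:
    """Accuracy on a task that mixes easy and hard queries.
    More calls push easy queries toward 1.0 and hard queries
    toward 0.0, so the mixture can peak at a finite n."""
    return (easy_share * majority_accuracy(n, easy_p)
            + (1 - easy_share) * majority_accuracy(n, hard_p))

for n in (1, 3, 5, 9, 25):
    print(f"n={n:2d}  accuracy={task_accuracy(n):.4f}")
```

Running this shows accuracy rising from n=1 to n=3 and then falling as n grows, mirroring the paper's observation that more calls help easy queries but hurt hard ones.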
Keywords
* Artificial intelligence
* Language model