Summary of Cost-efficient Knowledge-based Question Answering with Large Language Models, by Junnan Dong et al.
Cost-efficient Knowledge-based Question Answering with Large Language Models
by Junnan Dong, Qinggang Zhang, Chuang Zhou, Hao Chen, Daochen Zha, Xiao Huang
First submitted to arXiv on: 27 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this paper, the researchers propose a cost-efficient strategy for knowledge-based question answering (KBQA) that combines large language models (LLMs) with prior small models built on knowledge graphs (KGMs). The goal is higher inferential accuracy at lower cost. The authors formulate model selection as a multi-armed bandit problem and develop a tailored policy that minimizes calls to LLMs within a limited budget. They also optimize a context-aware policy that distinguishes among expert models based on question semantics. The resulting strategy, called Coke, achieves superior performance on KBQA benchmarks, saving up to 20.89% of GPT-4 fees while achieving 2.74% higher accuracy. |
Low | GrooveSquid.com (original content) | KBQA is used in many scenarios that require domain knowledge. Large language models can help, but they are expensive to call and often lack domain-specific knowledge from pre-training. The authors combine LLMs with small models built on knowledge graphs to improve accuracy while reducing cost. Their strategy, called Coke, minimizes calls to LLMs within a limited budget, achieving better results while saving money. |
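The core idea in the medium-difficulty summary, treating each expert (a cheap KG-based model versus an expensive LLM) as an arm of a multi-armed bandit and picking one per question under a spending budget, can be sketched as follows. This is a minimal illustration only: the class name, the costs, and the epsilon-greedy policy are our assumptions, not the paper's actual Coke algorithm, which uses a context-aware policy over question semantics.

```python
import random

class BudgetedRouter:
    """Toy budget-constrained bandit over two arms: 'kgm' (cheap) and 'llm' (costly).

    A hypothetical sketch of the selection idea described in the summary;
    all parameter values here are illustrative assumptions.
    """

    def __init__(self, budget, llm_cost=1.0, epsilon=0.1, seed=0):
        self.budget = budget          # total spend allowed on LLM calls
        self.llm_cost = llm_cost      # assumed cost per LLM call
        self.epsilon = epsilon        # exploration rate
        self.rng = random.Random(seed)
        # per-arm running statistics: [mean reward, pull count]
        self.stats = {"kgm": [0.0, 0], "llm": [0.0, 0]}

    def select(self):
        """Pick an arm; fall back to the cheap KGM once the budget is spent."""
        if self.budget < self.llm_cost:
            return "kgm"
        if self.rng.random() < self.epsilon:
            return self.rng.choice(["kgm", "llm"])
        # exploit: the arm with the higher average reward (accuracy) so far
        return max(self.stats, key=lambda arm: self.stats[arm][0])

    def update(self, arm, reward):
        """Charge the budget for LLM calls and update the arm's running mean."""
        if arm == "llm":
            self.budget -= self.llm_cost
        mean, n = self.stats[arm]
        self.stats[arm] = [(mean * n + reward) / (n + 1), n + 1]


# Example usage: route 10 questions with a budget of 3 LLM calls,
# using made-up per-arm accuracies as the reward signal.
router = BudgetedRouter(budget=3.0)
for _ in range(10):
    arm = router.select()
    router.update(arm, 1.0 if arm == "llm" else 0.6)
```

The budget check in `select` is what caps LLM spending: once the remaining budget cannot cover one more call, every subsequent question is answered by the small model, which is the cost-saving behavior the summary attributes to Coke.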
Keywords
» Artificial intelligence » Gpt » Question answering » Semantics