Summary of SoftQE: Learned Representations of Queries Expanded by LLMs, by Varad Pimpalkhute et al.
SoftQE: Learned Representations of Queries Expanded by LLMs
by Varad Pimpalkhute, John Heyer, Xusen Yin, Sameer Gupta
First submitted to arXiv on: 20 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Information Retrieval (cs.IR); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper explores how to integrate Large Language Models (LLMs) into query encoders for dense retrieval without increasing latency and cost. The authors propose SoftQE, which maps embeddings of input queries to the embeddings of their LLM-expanded counterparts, so the LLM's knowledge is incorporated without having to call the LLM at inference time. While improvements on in-domain MS-MARCO metrics are modest, SoftQE outperforms strong baselines by 2.83 absolute percentage points on average across five out-of-domain BEIR tasks. (A minimal sketch of this idea follows the table.) |
| Low | GrooveSquid.com (original content) | The researchers developed a way to use Large Language Models (LLMs) to help search engines find what people are looking for without slowing them down or costing more money. Their method, SoftQE, lets the computer understand a query better by matching it with information from the LLMs, which improves how well the system finds answers across different kinds of questions. |
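
To make the medium-difficulty description more concrete, here is a minimal, hypothetical sketch of a SoftQE-style training step in PyTorch: a student query encoder sees only the raw query, but its embedding is pulled toward a teacher embedding of the LLM-expanded query, alongside a standard in-batch contrastive retrieval loss. All names (`QueryEncoder`, `softqe_loss`, `mse_weight`, etc.) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of SoftQE-style training: the expanded queries are
# produced by an LLM offline, so no LLM call is needed at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QueryEncoder(nn.Module):
    """Stand-in dense encoder mapping token IDs to a single query embedding."""

    def __init__(self, vocab_size: int = 30522, dim: int = 768):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # toy stand-in for a Transformer

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(token_ids)  # (batch, dim)


def softqe_loss(
    student: QueryEncoder,
    teacher: QueryEncoder,
    query_ids: torch.Tensor,           # raw user queries (token IDs)
    expanded_query_ids: torch.Tensor,  # LLM-expanded queries, precomputed offline
    passage_emb: torch.Tensor,         # positive passage embeddings for the batch
    mse_weight: float = 1.0,
) -> torch.Tensor:
    """Contrastive retrieval loss plus an MSE term that maps the raw-query
    embedding onto the teacher's embedding of the LLM-expanded query."""
    q_emb = student(query_ids)
    with torch.no_grad():
        target_emb = teacher(expanded_query_ids)

    # In-batch negatives: each query should score highest on its own passage.
    scores = q_emb @ passage_emb.T
    labels = torch.arange(q_emb.size(0))
    contrastive = F.cross_entropy(scores, labels)

    # Distillation-style term: match the expanded-query embedding.
    distill = F.mse_loss(q_emb, target_emb)
    return contrastive + mse_weight * distill


if __name__ == "__main__":
    student, teacher = QueryEncoder(), QueryEncoder()
    batch, seq_len, dim = 4, 16, 768
    query_ids = torch.randint(0, 30522, (batch, seq_len))
    expanded_ids = torch.randint(0, 30522, (batch, seq_len))
    passage_emb = torch.randn(batch, dim)
    loss = softqe_loss(student, teacher, query_ids, expanded_ids, passage_emb)
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

The key design point the summary highlights is that the expensive LLM expansion happens only during training (or offline data preparation), so serving latency and cost stay the same as for a plain dense query encoder.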
Keywords
* Artificial intelligence
* Inference