Summary of SuRe: Summarizing Retrievals Using Answer Candidates for Open-Domain QA of LLMs, by Jaehyung Kim et al.
SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs
by Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, Jinwoo Shin
First submitted to arXiv on: 17 Apr 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper presents summarized retrieval (SuRe), a framework for open-domain question answering (ODQA) with large language models (LLMs). SuRe first prompts the LLM to generate answer candidates, then constructs a summary of the retrieved passages conditioned on each candidate, and finally selects the most plausible answer by evaluating the validity and relative ranking of these summaries. Experimental results demonstrate SuRe's superiority over standard prompting approaches, with improvements of up to 4.6% in exact match (EM) and 4.0% in F1 score. SuRe can be integrated with a broad range of retrieval methods and LLMs. |
| Low | GrooveSquid.com (original content) | Large language models have made great progress in answering questions. This paper introduces a new way to help these models give better answers by summarizing the information they find. The approach, called summarized retrieval (SuRe), helps a model pick the best answer from a group of possibilities: it writes a summary for each possible answer and then picks the answer whose summary is most convincing. This works well, improving exact match by up to 4.6% and F1 score by up to 4.0%. The summaries also help measure how important different pieces of retrieved information are. |
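The candidate-summarize-and-rank loop described in the summaries above can be sketched in a few lines. This is a minimal illustration rather than the authors' actual prompts or implementation: `llm` stands in for any text-generation call, and all function names and prompt wordings here are assumptions.

```python
def sure_answer(question, passages, candidates, llm):
    """Sketch of SuRe-style selection: summarize the retrieved passages
    conditioned on each answer candidate, keep summaries that actually
    support their candidate, and pick the candidate whose summary wins
    the most pairwise comparisons. (Illustrative prompts, not the paper's.)"""
    context = "\n".join(passages)
    # One summary per candidate, conditioned on that candidate being the answer.
    summaries = {
        c: llm(f"Summarize the passages to support answering "
               f"'{question}' with '{c}':\n{context}")
        for c in candidates
    }
    # Validity check: discard summaries that do not support their candidate.
    valid = {c: s for c, s in summaries.items()
             if llm(f"Does this summary support '{c}'? {s}") == "yes"}
    if not valid:          # fall back to all summaries if none pass the check
        valid = summaries

    # Ranking: count pairwise wins of each candidate's summary.
    def wins(c):
        return sum(
            llm(f"Which candidate better answers '{question}': "
                f"'{c}' (summary: {valid[c]}) or '{o}' (summary: {valid[o]})?") == c
            for o in valid if o != c
        )

    return max(valid, key=wins)
```

In practice `llm` would be a call to a chat-completion API; here any callable mapping a prompt string to a string works, which makes the selection logic easy to unit-test with a stub.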
Keywords
» Artificial intelligence » F1 score » Prompting » Question answering