Summary of RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering, by Rujun Han et al.
RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering
by Rujun Han, Yuhao Zhang, Peng Qi, Yumo Xu, Jenyuan Wang, Lan Liu, William Yang Wang, Bonan Min, Vittorio Castelli
First submitted to arXiv on: 19 Jul 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper introduces Long-form RobustQA (LFRQA), a new dataset for retrieval augmented question answering (RAG-QA) that evaluates large language model (LLM) systems on cross-domain generalization. LFRQA consists of human-written long-form answers that integrate short extractive answers from multiple documents, covering 26K queries across seven domains. The authors also introduce RAG-QA Arena, a platform that compares model-generated answers directly against LFRQA’s human-written answers, using LLMs as evaluators (a minimal sketch of this pairwise judging setup follows the table). Extensive experiments show that RAG-QA Arena’s preferences correlate highly with human judgments of answer quality, and that only 41.3% of the most competitive LLM’s answers are preferred over LFRQA’s answers. |
| Low | GrooveSquid.com (original content) | The paper creates a new dataset for question answering that tests how well big language models work across different topics. The dataset has real-life examples and is designed to test how well these models can understand and answer questions in different areas, like science or technology. The researchers also developed a way to evaluate the models’ answers, using human-written responses as the benchmark. They found that even the best language models still struggle to give better answers than the ones written by humans. |
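To make the evaluation protocol above concrete, here is a minimal Python sketch of a pairwise LLM-as-judge comparison in the spirit of RAG-QA Arena: a judge model sees a question together with a model-generated answer and the human-written LFRQA answer, and picks the better one. The prompt wording and the names `JUDGE_PROMPT`, `call_llm`, `model_answer_preferred`, and `win_rate` are illustrative assumptions, not the paper’s actual implementation.

```python
import random

# Hypothetical sketch of pairwise "LLM as evaluator" judging, as described in
# the paper's abstract. Everything below is an illustrative assumption, not
# code from the paper.

JUDGE_PROMPT = """Question: {question}

Answer A: {answer_a}

Answer B: {answer_b}

Which answer is more helpful, complete, and truthful? Reply with exactly "A" or "B"."""


def call_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion API; supply your own."""
    raise NotImplementedError


def model_answer_preferred(question: str, model_answer: str, human_answer: str) -> bool:
    """Return True if the judge prefers the model answer over the LFRQA answer.

    The two answers are shuffled into positions A/B at random to reduce
    the judge's position bias.
    """
    model_is_a = random.random() < 0.5
    answer_a, answer_b = (
        (model_answer, human_answer) if model_is_a else (human_answer, model_answer)
    )
    verdict = call_llm(
        JUDGE_PROMPT.format(question=question, answer_a=answer_a, answer_b=answer_b)
    ).strip().upper()
    return (verdict == "A") == model_is_a


def win_rate(examples: list[tuple[str, str, str]]) -> float:
    """Fraction of (question, model_answer, human_answer) triples where the
    model answer wins -- the kind of quantity behind the paper's 41.3% figure."""
    wins = sum(model_answer_preferred(q, m, h) for q, m, h in examples)
    return wins / len(examples)
```

Randomizing which answer appears in position A is a standard guard against position bias in LLM judges; a win rate below 50% on such a comparison means the human-written LFRQA answers are still preferred most of the time.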
Keywords
» Artificial intelligence » Domain generalization » Large language model » Question answering » RAG » Retrieval augmented generation