


Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation

by Jirui Qi, Gabriele Sarti, Raquel Fernández, Arianna Bisazza

First submitted to arXiv on: 19 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Recently, self-citation prompting was proposed to make large language models (LLMs) generate citations to supporting documents alongside their answers in question answering (QA) applications. However, this approach often fails to match the required format, refers to non-existent sources, and does not faithfully reflect the LLM's context usage throughout generation. To address these limitations, we present MIRAGE (Model Internals-based RAG Explanations), a plug-and-play approach that uses model internals for faithful answer attribution in RAG applications. MIRAGE detects context-sensitive answer tokens and pairs them with the retrieved documents contributing to their prediction via saliency methods. We evaluate our proposed approach on multilingual extractive QA datasets, achieving high agreement with human answer attribution. On open-ended QA, MIRAGE attains citation quality and efficiency comparable to self-citation while allowing finer-grained control over attribution parameters.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine you’re trying to get answers from a big computer model. Sometimes, these models don’t give us the right information or explain where they got their answers from. Researchers have been working on fixing this problem by having the model say which documents it used to come up with its answer. However, this approach has some issues, like not following the correct format or pointing to fake sources. To solve these problems, scientists created a new way called MIRAGE that uses information about how the computer model works to give better answers and explain where it got them from. They tested MIRAGE on many different kinds of questions and found that it worked well and gave accurate answers.
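The two-step pipeline described in the summaries above (detect which answer tokens depend on the retrieved context, then pair each such token with the most influential document) can be illustrated with a toy sketch. This is not the paper's implementation: all numbers, thresholds, and helper names below are hypothetical, and real saliency scores would come from model internals rather than hand-supplied values.

```python
# Toy sketch of MIRAGE-style answer attribution (illustrative only; the
# actual method derives these quantities from model internals and
# gradient-based saliency, not from hand-written numbers).

def context_sensitive_tokens(p_with_ctx, p_without_ctx, threshold=0.2):
    """Flag answer tokens whose predicted probability shifts noticeably
    once the retrieved documents are added to the prompt."""
    return [i for i, (pw, po) in enumerate(zip(p_with_ctx, p_without_ctx))
            if abs(pw - po) > threshold]

def attribute_tokens(sensitive_idx, saliency):
    """Pair each context-sensitive token with the retrieved document that
    receives the highest saliency score for its prediction."""
    return {i: max(range(len(saliency[i])), key=lambda d: saliency[i][d])
            for i in sensitive_idx}

# Hypothetical per-token probabilities for a 4-token answer.
p_ctx = [0.9, 0.4, 0.8, 0.7]       # with retrieved documents in the prompt
p_no_ctx = [0.85, 0.1, 0.75, 0.2]  # without them
# Hypothetical saliency of each of 2 documents for each flagged token.
sal = {1: [0.7, 0.3], 3: [0.2, 0.8]}

idx = context_sensitive_tokens(p_ctx, p_no_ctx)
print(idx)                          # [1, 3]: only these tokens shift
print(attribute_tokens(idx, sal))   # {1: 0, 3: 1}: token -> document index
```

The adjustable `threshold` hints at why the abstract mentions "finer-grained control of attribution parameters": tightening or loosening it changes how many answer tokens get cited.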

Keywords

» Artificial intelligence  » Prompting  » Question answering  » RAG