


Towards a Robust Retrieval-Based Summarization System

by Shengjie Liu, Jing Wu, Jingyuan Bao, Wenyi Wang, Naira Hovakimyan, Christopher G. Healey

First submitted to arXiv on: 29 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Information Retrieval (cs.IR); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper investigates the robustness of large language models (LLMs) on retrieval-augmented generation (RAG)-based summarization tasks, specifically in complex real-world scenarios. It introduces LogicSumm, an evaluation framework for assessing LLM robustness, and, guided by the limitations LogicSumm reveals, develops SummRAG, a comprehensive system for generating training dialogues and fine-tuning models to strengthen robustness within LogicSumm's scenarios. Experimental results demonstrate improved logical coherence and summarization quality with SummRAG.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper explores how well large language models can summarize information retrieved from complex sources. It introduces a new way to test these models, called LogicSumm, which examines how they perform in realistic scenarios. Because the models did not perform well enough, the researchers built a system called SummRAG to train them to do better. SummRAG generates training dialogues and fine-tunes the model so that it handles the scenarios tested by LogicSumm. The results show that using this system improves summarization quality.
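
To make the retrieve-then-summarize setup described above more concrete, here is a minimal sketch of such a pipeline. It is not the paper's SummRAG or LogicSumm code: the bag-of-words retriever, the min_sim relevance filter, and the summarize_with_llm function are hypothetical stand-ins for a real embedding model and a fine-tuned LLM.

```python
# Minimal retrieve-then-summarize sketch (illustrative only, not the authors' system).
from __future__ import annotations

from collections import Counter
import math


def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words term counts of two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2, min_sim: float = 0.1) -> list[str]:
    """Return the top-k documents most similar to the query.

    Documents scoring below min_sim are dropped -- a crude stand-in for the
    relevance checks a robust RAG summarizer needs when retrieval returns
    off-topic text.
    """
    scored = sorted(corpus, key=lambda d: bow_cosine(query, d), reverse=True)
    return [d for d in scored[:k] if bow_cosine(query, d) >= min_sim]


def summarize_with_llm(topic: str, documents: list[str]) -> str:
    """Placeholder for an LLM call; a real system would prompt a (fine-tuned) model here."""
    if not documents:
        return f"No sufficiently relevant documents were retrieved for '{topic}'."
    return f"Summary of {len(documents)} document(s) about '{topic}': " + " ".join(documents)


if __name__ == "__main__":
    corpus = [
        "Retrieval augmented generation combines a retriever with a language model.",
        "Fine-tuning on dialogue data can improve summarization robustness.",
        "An unrelated note about cooking pasta.",
    ]
    topic = "robust retrieval augmented summarization"
    print(summarize_with_llm(topic, retrieve(topic, corpus)))
```

The relevance filter is the part the paper's robustness scenarios stress: when retrieval brings back irrelevant or conflicting documents, a naive summarizer will still summarize them, which is the failure mode LogicSumm is designed to expose.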

Keywords

  » Artificial intelligence  » Fine tuning  » RAG  » Retrieval augmented generation  » Summarization