Improving Faithfulness of Large Language Models in Summarization via Sliding Generation and Self-Consistency

by Taiji Li, Zhi Li, Yin Zhang

First submitted to arXiv on: 31 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper but is written at a different level of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)
This paper proposes a novel approach to improve the faithfulness of large language models (LLMs) in summarization, addressing the issue of hallucinations where LLMs generate content that diverges from source articles. The authors introduce SliSum, a summary generation strategy that divides the article into overlapping windows and uses LLMs to generate local summaries, which are then aggregated using clustering and majority voting algorithms. This approach is tested on diverse LLMs, including LLaMA-2, Claude-2, and GPT-3.5, in both short and long text summarization tasks. The results show that SliSum significantly improves the faithfulness of these models while maintaining their fluency and informativeness without additional fine-tuning or resources.
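Since the medium summary describes the full SliSum pipeline, here is a minimal Python sketch of how such a sliding-generation-plus-voting loop could look. It is an illustration, not the authors' implementation: `summarize()` is a stand-in for whatever LLM call you use (LLaMA-2, Claude-2, GPT-3.5), the word-overlap similarity is a toy substitute for the paper's clustering step, and the window size, step, and vote threshold are arbitrary example values.

```python
import re

def sliding_windows(words, size=512, step=256):
    """Yield overlapping windows of `size` words, advancing by `step`."""
    for start in range(0, max(len(words) - step, 1), step):
        yield " ".join(words[start:start + size])

def summarize(window_text):
    """Stand-in for an LLM call (e.g., LLaMA-2, Claude-2, or GPT-3.5)."""
    raise NotImplementedError("plug your model API in here")

def jaccard(a, b):
    """Toy sentence similarity: word-overlap (Jaccard) score."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def cluster_sentences(sentences, threshold=0.5):
    """Greedy single-link clustering of near-duplicate statements."""
    clusters = []
    for sent in sentences:
        for cluster in clusters:
            if any(jaccard(sent, other) >= threshold for other in cluster):
                cluster.append(sent)
                break
        else:
            clusters.append([sent])
    return clusters

def slisum(article, min_votes=2):
    """Sliding generation plus self-consistency aggregation."""
    # 1. Summarize each overlapping window of the article independently.
    local = [summarize(w) for w in sliding_windows(article.split())]
    # 2. Pool every sentence from all local summaries.
    sentences = [s.strip()
                 for summary in local
                 for s in re.split(r"(?<=[.!?])\s+", summary)
                 if s.strip()]
    # 3. Keep statements that enough windows agree on (majority voting),
    #    taking one representative sentence per surviving cluster.
    kept = [max(cluster, key=len)
            for cluster in cluster_sentences(sentences)
            if len(cluster) >= min_votes]
    return " ".join(kept)
```

The overlap between adjacent windows is what makes the voting step meaningful: a statement grounded in the article should recur across several local summaries, while a hallucinated statement is unlikely to be reproduced consistently, so the `min_votes` filter tends to drop it.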

Low Difficulty Summary (GrooveSquid.com original content)
This paper tries to make big language models better at writing summaries by making them treat the whole article fairly. Right now, these models sometimes get information wrong or focus mostly on what’s at the beginning and end of an article. The authors came up with a new way to generate summaries, called SliSum, that breaks the article into smaller overlapping parts and uses the model to write a summary for each part. Then it combines all those local summaries into one better summary. They tested this approach on different models and found that it works really well without needing any extra training or resources.

Keywords

» Artificial intelligence  » Claude  » Clustering  » Fine tuning  » Gpt  » Llama  » Summarization