

What’s Wrong? Refining Meeting Summaries with LLM Feedback

by Frederic Kirstein, Terry Ruas, Bela Gipp

First submitted to arXiv on: 16 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed multi-LLM correction approach shows significant promise for improving the quality of meeting summaries by leveraging multiple large language models (LLMs) to validate output quality. The two-phase process mimics human review: first identifying mistakes, then refining the summary. The approach is built on QMSum Mistake, a dataset of 200 automatically generated meeting summaries annotated by humans across nine error types. Experimental results indicate that LLMs can accurately identify these errors and transform them into actionable feedback that improves summary quality as measured by relevance, informativeness, conciseness, and coherence. The post-hoc refinement process successfully improves summary quality, showing potential for similar complex text-generation tasks.
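The two-phase loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the error-type names (a subset of the nine in QMSum Mistake), the prompt wording, and the `llm` callable are all assumptions made for the example.

```python
# Hypothetical sketch of the two-phase multi-LLM correction pipeline:
# phase 1 identifies mistakes, phase 2 refines the summary from that feedback.
from typing import Callable, List

# Placeholder subset of the nine annotated error types (names assumed).
ERROR_TYPES = ["omission", "hallucination", "incoherence", "redundancy"]

def identify_mistakes(llm: Callable[[str], str],
                      transcript: str, summary: str) -> List[str]:
    """Phase 1: a reviewer LLM checks the summary for each error type."""
    feedback = []
    for error in ERROR_TYPES:
        prompt = (
            f"Transcript:\n{transcript}\n\nSummary:\n{summary}\n\n"
            f"Does the summary contain a '{error}' error? "
            "Explain it in one sentence, or answer 'none'."
        )
        answer = llm(prompt).strip()
        if answer.lower() != "none":
            feedback.append(f"{error}: {answer}")
    return feedback

def refine_summary(llm: Callable[[str], str], transcript: str,
                   summary: str, feedback: List[str]) -> str:
    """Phase 2: a writer LLM rewrites the summary using the feedback."""
    if not feedback:
        return summary  # no mistakes found, keep the draft as-is
    prompt = (
        f"Transcript:\n{transcript}\n\nDraft summary:\n{summary}\n\n"
        "Reviewer feedback:\n" + "\n".join(f"- {f}" for f in feedback) +
        "\n\nRewrite the summary to address every feedback point."
    )
    return llm(prompt)

def correct(llm: Callable[[str], str], transcript: str, summary: str) -> str:
    """Full post-hoc pipeline: identify mistakes, then refine."""
    return refine_summary(llm, transcript, summary,
                          identify_mistakes(llm, transcript, summary))

# Toy stub standing in for a real LLM, just to show the control flow.
def stub_llm(prompt: str) -> str:
    if "Rewrite the summary" in prompt:
        return "Revised summary including the agreed deadline."
    if "'omission'" in prompt:
        return "The agreed deadline is missing from the summary."
    return "none"
```

In practice each phase could use a different model (a "reviewer" and a "writer"), which is what makes the approach multi-LLM rather than simple self-correction.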
Low Difficulty Summary (written by GrooveSquid.com, original content)
Meeting summarization is a crucial task in today’s digital world, and large language models (LLMs) can help. These models are good at writing summaries, but they sometimes get things wrong. To fix this, researchers introduced a new way to correct meeting summaries using multiple LLMs. It works like a human reviewer would: first finding mistakes, then fixing them. The team created a special dataset of 200 summaries with human-annotated errors, which they used to evaluate the models. The results show that the models can find mistakes with high accuracy. By giving feedback based on these mistakes, the quality of the summary improves, making it more relevant, informative, and easy to understand.

Keywords

» Artificial intelligence  » Summarization  » Text generation