ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models
by Thibaut Thonet, Jos Rozen, Laurent Besacier
First submitted to arXiv on: 29 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract (read it here). |
| Medium | GrooveSquid.com (original content) | This research proposes a new benchmark for Large Language Models (LLMs) focused on a practical meeting assistant scenario, addressing the limitations of existing benchmarks. The proposed ELITR-Bench augments the existing ELITR corpus with 271 manually crafted questions and noisy versions of the meeting transcripts. Experiments with 12 long-context LLMs confirm progress across generations of proprietary and open models, while highlighting discrepancies in their robustness to transcript noise. A thorough analysis of GPT-4-based evaluation, including a crowdsourcing study, indicates that although GPT-4's scores align with those of human judges, its ability to distinguish among more than three score levels may be limited. |
| Low | GrooveSquid.com (original content) | This research is about testing special computer models that can understand long conversations. These models are important because they could help us build machines that assist people in many situations, like taking notes at a meeting. The researchers created a new test for these models that focuses on how well they can understand and answer questions based on meeting transcripts. They tested 12 of these models and found that some are better than others at handling noisy or imperfect transcripts of what was said. |
Keywords
» Artificial intelligence » GPT