
MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection

by Federico Borra, Claudio Savelli, Giacomo Rosso, Alkis Koudounas, Flavio Giobergia

First submitted to arXiv on: 1 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper tackles hallucinations in Large Language Models (LLMs) used for Natural Language Generation (NLG): these models often produce fluent yet factually inaccurate outputs, a problem compounded by evaluation that relies on fluency-centric metrics. To address this, the authors introduce two key components: a data augmentation pipeline that incorporates pseudo-labelling and sentence rephrasing, and a voting ensemble of three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets. A rough code sketch of the ensemble idea appears after the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps to solve the problem of hallucinations in LLMs used for NLG. It’s an important step forward because it can improve the accuracy of language generation.
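
As a purely illustrative aid, the sketch below shows one way the voting-ensemble component described in the medium difficulty summary could look in code: three NLI classifiers score a (source, generation) pair and a majority vote flags the generation as a hallucination. This is a minimal sketch assuming the Hugging Face transformers library; the checkpoint names, premise/hypothesis formatting, and simple majority rule are placeholders rather than the authors' actual configuration, and the data augmentation pipeline (pseudo-labelling and sentence rephrasing) is not shown.

# Illustrative sketch: majority-vote ensemble of NLI classifiers as a
# hallucination detector. Checkpoints and decision rule are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder NLI checkpoints; in practice these would be the three fine-tuned models.
CHECKPOINTS = [
    "roberta-large-mnli",
    "microsoft/deberta-large-mnli",
    "facebook/bart-large-mnli",
]

ensemble = []
for name in CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name).eval()
    ensemble.append((tokenizer, model))


def is_hallucination(source: str, generation: str) -> bool:
    """Return True if a majority of the NLI models judge that `generation`
    is not entailed by `source` (premise = source, hypothesis = generation)."""
    votes = 0
    for tokenizer, model in ensemble:
        inputs = tokenizer(source, generation, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        label = model.config.id2label[int(logits.argmax(dim=-1))]
        # A non-entailment prediction counts as one vote for "hallucination".
        votes += int(label.lower() != "entailment")
    return votes > len(ensemble) // 2


if __name__ == "__main__":
    print(is_hallucination(
        "The Eiffel Tower is located in Paris, France.",
        "The Eiffel Tower stands in Berlin.",
    ))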

Keywords

  • Artificial intelligence
  • Data augmentation
  • Inference