
Summary of Assessing Contamination in Large Language Models: Introducing the LogProber Method, by Nicolas Yax, Pierre-Yves Oudeyer, and Stefano Palminteri


Assessing Contamination in Large Language Models: Introducing the LogProber method

by Nicolas Yax, Pierre-Yves Oudeyer, Stefano Palminteri

First submitted to arXiv on: 26 Aug 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The paper's original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
This paper introduces LogProber, a novel algorithm for detecting contamination in Large Language Models (LLMs). Contamination refers to the leakage of testing data into the training set, which is particularly problematic when evaluating LLMs trained on massive web-scraped corpora. The authors highlight the need for tools that can quantify contamination on the kind of short text sequences common in psychology questionnaires. LogProber uses the probabilities the model assigns to the tokens of a given sentence to detect contamination efficiently and accurately. The paper also explores the limitations of the method and discusses how certain training regimes can contaminate models without leaving traces in token probabilities.
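The summary above does not give the authors' exact formula, but the core intuition can be sketched: a model that has memorized a test item assigns near-certain probabilities to its tokens once a short prefix is seen, whereas genuinely novel text remains surprising throughout. The snippet below is a minimal, hypothetical illustration of that idea (the function names, prefix length, and threshold are assumptions for illustration, not the paper's actual method); in practice the per-token log-probabilities would be queried from an LLM.

```python
# Hypothetical sketch of a LogProber-style memorization probe.
# Assumption: contaminated (memorized) text shows near-zero per-token
# log-probabilities after a short prefix, while novel text does not.

def mean_tail_logprob(token_logprobs, prefix_len=3):
    """Average log-probability of the tokens after a short prefix."""
    tail = token_logprobs[prefix_len:]
    return sum(tail) / len(tail)

def looks_memorized(token_logprobs, prefix_len=3, threshold=-0.5):
    """Flag a sequence whose tail tokens are suspiciously predictable."""
    return mean_tail_logprob(token_logprobs, prefix_len) > threshold

# Synthetic per-token log-probs; in practice these would come from an
# LLM's scoring of a questionnaire item.
memorized = [-4.0, -3.5, -2.0, -0.05, -0.02, -0.01, -0.03]  # tail near 0
novel     = [-4.0, -3.8, -3.1, -2.90, -3.30, -2.70, -3.00]  # tail stays low

print(looks_memorized(memorized))  # True
print(looks_memorized(novel))      # False
```

The threshold and prefix length here are arbitrary; the paper's contribution is precisely a principled way of making this kind of judgment from token probabilities.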
Low Difficulty Summary (GrooveSquid.com, original content)
This paper is about a new tool that helps check whether large language models are evaluated fairly and accurately. These models are trained on huge amounts of text from the internet, so testing data might accidentally get mixed in with the training data. This issue is especially important when evaluating these models on short items like psychology questionnaires. The new algorithm, called LogProber, uses the probabilities the model assigns to each word to detect contamination and prevent biased results.

Keywords

» Artificial intelligence  » Probability  » Token