
Summary of What Did I Do Wrong? Quantifying LLMs’ Sensitivity and Consistency to Prompt Engineering, by Federico Errica et al.


What Did I Do Wrong? Quantifying LLMs’ Sensitivity and Consistency to Prompt Engineering

by Federico Errica, Giuseppe Siracusano, Davide Sanvito, Roberto Bifulco

First submitted to arXiv on: 18 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Software Engineering (cs.SE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces two novel metrics, sensitivity and consistency, designed to measure the performance of Large Language Models (LLMs) in classification tasks. These metrics are complementary to task performance, providing a more nuanced understanding of an LLM’s behavior. Sensitivity measures changes in predictions across minor variations of the prompt, without requiring ground truth labels, while consistency assesses how predictions vary for elements of the same class. The authors demonstrate the effectiveness of these metrics through an empirical comparison on text classification tasks, using them as guidelines to understand failure modes and improve LLMs’ robustness and performance.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about making sure Large Language Models (LLMs) work consistently when we ask them questions. Right now, these models are great at helping us with some tasks, but they can be tricky to work with because their answers can change if we phrase the question slightly differently. The authors of this paper created two new ways to measure how well LLMs do in different situations: sensitivity and consistency. They tested these metrics on text classification tasks and found that they can help us understand when LLMs are making mistakes.

Keywords

» Artificial intelligence  » Classification  » Prompt  » Text classification