
Premise Order Matters in Reasoning with Large Language Models

by Xinyun Chen, Ryan A. Chi, Xuezhi Wang, Denny Zhou

First submitted to arXiv on 14 Feb 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

Large language models (LLMs) have achieved impressive reasoning performance across many domains, yet they are surprisingly brittle to the order in which premises are presented, even though premise order does not change the underlying task. Specifically, LLMs perform best when the premises appear in the same order as they are needed in the intermediate reasoning steps: on deductive reasoning tasks, presenting the premises in the order of the ground-truth proof yields a significant accuracy boost over a random ordering. The authors' evaluation shows that permuting the premise order can cause a performance drop of more than 30%. They also release R-GSM, a benchmark built on GSM8K, to examine the same effect in mathematical problem solving, and again observe a substantial accuracy decline relative to the original benchmark.
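To make the perturbation concrete, here is a minimal Python sketch of the kind of premise shuffling the paper evaluates. The crude sentence-splitting heuristic and the `permute_premises` helper are illustrative assumptions, not the authors' R-GSM construction code.

```python
import random

def permute_premises(problem: str, seed: int = 0) -> str:
    """Shuffle the premise sentences of a word problem while keeping
    the final question in place.

    A toy illustration of the perturbation studied in the paper; the
    sentence splitting below is a rough heuristic, not the authors'
    R-GSM pipeline.
    """
    sentences = [s.strip() for s in problem.split(". ") if s.strip()]
    *premises, question = sentences  # assume the question comes last
    random.Random(seed).shuffle(premises)  # deterministic permutation
    return ". ".join(premises + [question])

# Hypothetical usage: score a model on the original and the permuted
# prompt, then compare accuracies to measure order sensitivity.
problem = (
    "Alice has 3 apples. "
    "Bob gives Alice twice as many apples as she already has. "
    "How many apples does Alice have now?"
)
print(permute_premises(problem, seed=42))
```

Seeding the random generator makes each permutation reproducible, so the same shuffled variant can be re-scored across different models.
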
Low Difficulty Summary (written by GrooveSquid.com, original content)

Large language models are super smart at doing math problems! But did you know that they get stuck if the order of the clues is mixed up? It’s like trying to solve a puzzle with all the pieces in the wrong place. The researchers looked at how these models do when given clues in different orders and found out that it makes a big difference. They even created a special test called R-GSM to see how well they do when the clues are jumbled up.

Keywords

* Artificial intelligence
* Prompt