
Summary of Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse, by Ryan Liu et al.


Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse

by Ryan Liu, Jiayi Geng, Addison J. Wu, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths

First submitted to arXiv on: 27 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates the chain-of-thought (CoT) prompting strategy for large language and multimodal models. While CoT has shown promise across many tasks, determining when it is effective remains an open question; in particular, it is unclear when CoT systematically reduces model performance. To address this, the authors draw inspiration from cognitive psychology, focusing on cases where verbal thinking or deliberation hurts human performance. They examine three such cases: implicit statistical learning, visual recognition, and classifying with patterns that contain exceptions. On these tasks, state-of-the-art models show significant performance drop-offs (up to 36.3% in absolute accuracy) when using inference-time reasoning compared to their zero-shot counterparts. The authors also identify tasks where verbal thinking reduces human performance but CoT retains or improves model performance. Overall, the paper shows that considering cases where thinking has negative consequences for humans can help identify settings where it negatively impacts models.
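To make the comparison described above concrete, here is a minimal Python sketch of scoring a model zero-shot versus with a CoT-style prompt and reporting the accuracy drop. It is an illustration under stated assumptions: the prompt wording, the `model` callable, the task items, and the scoring rule are placeholders, not the paper's actual prompts, models, or benchmarks.

```python
# Hedged sketch: compare zero-shot vs. chain-of-thought prompting on a labeled task.
# `model` is a hypothetical stand-in for whatever LLM interface you use: it takes a
# prompt string and returns the model's text output.

from typing import Callable, List, Tuple

ZERO_SHOT_TEMPLATE = "{question}\nAnswer with only the final label."
COT_TEMPLATE = "{question}\nThink step by step, then give the final label."


def accuracy(model: Callable[[str], str],
             items: List[Tuple[str, str]],
             template: str) -> float:
    """Fraction of (question, gold_label) items whose output contains the gold label."""
    correct = 0
    for question, gold in items:
        answer = model(template.format(question=question))
        correct += int(gold.lower() in answer.lower())
    return correct / len(items)


def compare(model: Callable[[str], str], items: List[Tuple[str, str]]) -> None:
    """Print zero-shot accuracy, CoT accuracy, and the CoT-induced drop (if any)."""
    zs = accuracy(model, items, ZERO_SHOT_TEMPLATE)
    cot = accuracy(model, items, COT_TEMPLATE)
    # The paper's headline cases are those where (zs - cot) is large and positive.
    print(f"zero-shot: {zs:.3f}  CoT: {cot:.3f}  drop: {zs - cot:+.3f}")
```

Checking whether the gold label appears in the output is a deliberately crude scoring rule for brevity; the tasks studied in the paper each use their own materials and evaluation procedures.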
Low Difficulty Summary (original content by GrooveSquid.com)
This paper studies how chain-of-thought (CoT) prompting affects large language and multimodal models. CoT asks a model to write out its reasoning step by step, but we don't know when this helps and when it hurts. The authors look at special cases where thinking out loud makes people worse at a task, such as picking up hidden statistical patterns or recognizing images. They find that on several of these tasks, models also do worse when asked to reason step by step. This research can help us understand why some prompts work better than others and how lessons from human psychology can predict when step-by-step thinking will hurt these models.

Keywords

  • Artificial intelligence
  • Inference
  • Prompting
  • Zero shot