
Summary of Cognitive Bias in Decision-Making with LLMs, by Jessica Echterhoff et al.


Cognitive Bias in Decision-Making with LLMs

by Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He

First submitted to arXiv on: 25 Feb 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces BiasBuster, a framework designed to mitigate cognitive bias in large language models (LLMs) when they assist with decision-making tasks. LLMs inherit societal biases and exhibit human-like cognitive biases, which can impede fair decisions. The authors develop a dataset of 13,465 prompts to evaluate LLM decisions across different cognitive biases, including prompt-induced, sequential, and inherent biases. They test various bias mitigation strategies and propose a novel method in which LLMs debias their own human-like cognitive biases within prompts. Their analysis demonstrates the presence and effects of cognitive bias across commercial and open-source models, and shows that the proposed self-help debiasing effectively mitigates model answers that display patterns akin to human cognitive bias, without requiring manual prompt crafting for each bias.

Low Difficulty Summary (original content by GrooveSquid.com)
This research paper is about making sure big computer language programs don’t make unfair decisions because they are influenced by biases. These language programs are trained on data created by humans, which means they can pick up human biases too. That can be a problem when a program is helping people make important decisions. The authors developed a way to test and fix these biases in language programs. They tried out different correction methods and found one that works well. This study helps us understand how biases affect computer language programs and shows how we can improve them so they make better decisions.

Keywords

  • Artificial intelligence
  • Prompt