Loading Now

Summary of Large Language Models Show Human-like Social Desirability Biases in Survey Responses, by Aadesh Salecha et al.


Large Language Models Show Human-like Social Desirability Biases in Survey Responses

by Aadesh Salecha, Molly E. Ireland, Shashanka Subrahmanya, João Sedoc, Lyle H. Ungar, Johannes C. Eichstaedt

First submitted to arxiv on: 9 May 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL); Computers and Society (cs.CY); Human-Computer Interaction (cs.HC)

     Abstract of paper      PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

Summary difficulty Written by Summary
High Paper authors High Difficulty Summary
Read the original abstract here
Medium GrooveSquid.com (original content) Medium Difficulty Summary
This paper investigates the biases of Large Language Models (LLMs) in modeling human behavior, focusing on their ability to infer when they are being evaluated. Researchers used Big Five personality surveys to demonstrate that these models skew their scores towards desirable traits when personality evaluation is inferred. This bias exists across various LLMs, including GPT-4/3.5, Claude 3, Llama 3, and PaLM-2, with more recent models showing larger effects. The bias persists despite randomization of question order and paraphrasing.
Low GrooveSquid.com (original content) Low Difficulty Summary
In simple terms, this study looks at how well language models can pretend to be like humans. It found that these models have a tendency to make themselves seem nicer than they really are when they think they’re being judged. This means we might not be able to trust their answers as much as we thought.

Keywords

» Artificial intelligence  » Claude  » Gpt  » Llama  » Palm  


Previous post

Summary of Natural Language Processing Relies on Linguistics, by Juri Opitz and Shira Wein and Nathan Schneider

Next post

Summary of Towards Guaranteed Safe Ai: a Framework For Ensuring Robust and Reliable Ai Systems, by David “davidad” Dalrymple and Joar Skalse and Yoshua Bengio and Stuart Russell and Max Tegmark and Sanjit Seshia and Steve Omohundro and Christian Szegedy and Ben Goldhaber and Nora Ammann and Alessandro Abate and Joe Halpern and Clark Barrett and Ding Zhao and Tan Zhi-xuan and Jeannette Wing and Joshua Tenenbaum