


Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI

by Hangzhi Guo, Pranav Narayanan Venkit, Eunchae Jang, Mukund Srinath, Wenbo Zhang, Bonam Mingole, Vipul Gupta, Kush R. Varshney, S. Shyam Sundar, Amulya Yadav

First submitted to arXiv on: 20 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The study investigates how non-experts perceive and interact with biases in large language models (LLMs) and generative AI (GenAI) tools, highlighting the importance of understanding the societal biases inherent in these technologies. The research presents findings from a university-level competition that challenged participants to design prompts eliciting biased outputs from GenAI tools. Participants’ submissions were analyzed quantitatively and qualitatively to identify a diverse set of biases in GenAI and the strategies used to induce them. This work informs model developers’ efforts to mitigate bias.

Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how people without technical expertise interact with biases in AI language models. It’s like when you use a chatbot that sometimes says weird or unfair things. The researchers wanted to know what makes these AI systems say those things and what we can do to make them fairer. They found that people tried different ways to get the AI to say biased things, like asking it to make jokes about certain groups of people. This study helps us understand how to fix the problem.

Keywords

  • Artificial intelligence