Summary of Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation, by Tiffany Zhu et al.
Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation
by Tiffany Zhu, Iain Weissburg, Kexun Zhang, William Yang Wang
First submitted to arXiv on: 29 Sep 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This study investigates how biases influence people’s perceptions of AI-generated versus human-generated content. The researchers conducted three experiments involving text rephrasing, news article summarization, and persuasive writing. When participants were presented with unlabeled texts, they could not distinguish between AI- and human-generated content. However, when the labels “Human Generated” and “AI Generated” were added, participants strongly preferred the content labeled as human-generated, with a preference shift of over 30%. This bias against AI-generated content persisted even when the labels were intentionally swapped. The study highlights the limitations of human judgment in interacting with AI, suggests that this bias may lead people to undervalue AI’s performance, and has implications for improving human-AI collaboration, particularly in creative fields. |
| Low | GrooveSquid.com (original content) | This research looks at how people feel about AI-generated writing compared to writing done by humans. The scientists ran three tests in which texts were rephrased, news articles were summarized, and persuasive pieces were written. When people read the texts without labels, they couldn’t tell which ones were made by AI and which by humans. But when they saw labels identifying the texts as human-written or AI-written, most people liked the human-labeled content much better (by over 30%). This bias means people tend to undervalue what AI can do, and the study shows that we need to work on improving how humans and AI work together. |
Keywords
» Artificial intelligence » Summarization