Summary of The Great AI Witch Hunt: Reviewers' Perception and (Mis)Conception of Generative AI in Research Writing, by Hilda Hadan et al.
The Great AI Witch Hunt: Reviewers' Perception and (Mis)Conception of Generative AI in Research Writing
by Hilda Hadan, Derrick Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke
First submitted to arXiv on: 27 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates how peer reviewers perceive and evaluate research manuscripts augmented with generative AI (GenAI). The study surveyed 17 top-tier HCI conference peer reviewers who were presented with snippets of human-written texts and their AI-augmented counterparts. Results show that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks detailed research insights from authors. Reviewers struggled to distinguish between human- and AI-generated text but maintained consistent judgments. They noted the loss of a "human touch" and subjective expressions in AI-augmented writing. The study concludes that reviewer guidelines should prioritize impartial evaluations, focusing on the quality of research rather than personal biases towards GenAI. |
| Low | GrooveSquid.com (original content) | This paper looks at how reviewers react to research papers written with the help of computers (Generative AI or GenAI). Researchers gave 17 top reviewers pieces of text – some were written by people and others were made by computer programs. The study found that using GenAI makes writing clearer, more interesting, and informative. But it also means authors' own ideas and insights are missing. Reviewers had trouble telling human-written texts from those made by computers, but their opinions didn't change. They thought the computer-generated texts lacked a personal touch and feelings. The researchers suggest that reviewers should just focus on how good the research is, not whether humans or computers wrote it. |