DUPE: Detection Undermining via Prompt Engineering for Deepfake Text
by James Weichert, Chinecherem Dimobi
First submitted to arXiv on 17 Apr 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates the accuracy of publicly available artificial intelligence (AI) text detectors in distinguishing human-written from AI-generated essays. The authors evaluate three AI text detectors – the Kirchenbauer et al. watermarking scheme, ZeroGPT, and GPTZero – against both human-written and AI-generated essays. They find that watermarking yields a high false positive rate, while ZeroGPT suffers from both high false positive and high false negative rates. Moreover, the researchers demonstrate that using ChatGPT 3.5 to paraphrase the original AI-generated texts significantly increases the false negative rate of all three detectors, effectively bypassing them. |
| Low | GrooveSquid.com (original content) | This study looks at how well artificial intelligence (AI) tools can detect whether a piece of writing was produced by a human or by a computer program. The researchers tested three different detectors against both real essays and computer-generated ones. They found that these tools are not very good at telling the difference, which could be a problem when students use AI to write their school assignments. The study also shows that slightly rewording an AI-generated text makes it even harder for the detectors to recognize it as machine-written. |