Summary of Controlling Out-of-Domain Gaps in LLMs for Genre Classification and Generated Text Detection, by Dmitri Roussinov et al.
Controlling Out-of-Domain Gaps in LLMs for Genre Classification and Generated Text Detection
by Dmitri Roussinov, Serge Sharoff, Nadezhda Puchnina
First submitted to arXiv on: 29 Dec 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here.
Medium | GrooveSquid.com (original content) | This study examines the limitations of Large Language Models (LLMs) such as GPT-4, finding that they suffer from an out-of-domain (OOD) performance gap similar to that of pre-trained Language Models (PLMs) like BERT. The research focuses on two non-topical classification tasks: genre classification and generated text detection. By prompting LLMs with in-context learning (ICL) examples from one domain and testing them on another, the study reveals a significant decline in classification performance when transitioning between domains.
Low | GrooveSquid.com (original content) | This paper looks at how well large language models do when they're tested on things they weren't trained for. The researchers found that even the best models, like GPT-4, struggle to make good predictions when shown examples from a different topic or area of expertise. They showed this by trying to identify genres of writing and to detect machine-generated text in two different scenarios. Overall, the study shows that these powerful language tools still have limitations and can't always generalize well outside their training data.
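The cross-domain evaluation described above can be sketched as a simple accuracy comparison: score the model once with in-context examples drawn from the test set's own domain and once with examples from a different domain, then take the difference. This is a minimal illustration of the idea, not the paper's actual code; the function names, label values, and toy predictions are all hypothetical.

```python
# Hypothetical sketch of measuring an out-of-domain (OOD) performance gap.
# In the real setting, predictions would come from an LLM prompted with
# in-context learning (ICL) examples from either the same or a different
# domain as the test data.

def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def ood_gap(in_domain_preds, cross_domain_preds, labels):
    """Drop in accuracy when ICL examples come from a different domain."""
    return accuracy(in_domain_preds, labels) - accuracy(cross_domain_preds, labels)

# Toy illustration with made-up genre predictions:
labels = ["news", "fiction", "news", "fiction"]
in_dom = ["news", "fiction", "news", "fiction"]   # 4/4 correct
cross  = ["news", "news", "news", "fiction"]      # 3/4 correct
print(ood_gap(in_dom, cross, labels))  # 0.25
```

A larger gap here means the model relies more heavily on domain-matched examples, which is the behavior the study quantifies.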
Keywords
» Artificial intelligence » BERT » Classification » GPT