Summary of Evaluating Text Classification Robustness to Part-of-speech Adversarial Examples, by Anahita Samadi and Allison Sullivan
First submitted to arXiv on: 15 Aug 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | This research investigates how vulnerable convolutional neural networks (CNNs) used in text classification are to adversarial examples: inputs crafted to preserve semantics while tricking the model's decision-making. The study identifies which parts of speech have the greatest impact on text classifiers and uncovers a bias in CNN models against certain linguistic tokens in review datasets. By exposing these vulnerabilities, the paper aims to improve the quality of text-based adversarial examples. |
| Low | GrooveSquid.com (original content) | Machine learning systems are becoming increasingly important, especially in safety-critical applications. However, these systems can be tricked by "adversarial examples": inputs that look normal but make the system behave incorrectly. For text classification, this means changing the input text without making it look strange or unnatural. Recent research has shown that such attacks often fail to preserve the meaning of the text, even when they try. This study asks which parts of language matter most to these classifiers and how they can be exploited. |
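To make the idea of a part-of-speech-targeted attack concrete, here is a minimal sketch. The POS lexicon, synonym map, and toy keyword-based classifier below are illustrative assumptions, not the paper's actual models or data; the point is only to show the attack loop: substitute words of one part of speech with near-synonyms until the predicted label flips.

```python
# Illustrative sketch of a POS-targeted adversarial substitution attack.
# All lexicons and the classifier are hypothetical stand-ins.

# Tiny hand-labeled POS lexicon (assumption for this sketch).
POS = {
    "great": "ADJ", "fun": "ADJ", "boring": "ADJ",
    "movie": "NOUN", "plot": "NOUN", "was": "VERB", "the": "DET",
}

# Meaning-preserving substitutions for selected words (assumption).
SYNONYMS = {"great": "grand", "fun": "pleasant", "boring": "dull"}

POSITIVE = {"great", "fun"}  # words the toy classifier keys on

def classify(tokens):
    """Toy classifier: positive iff it sees a known positive word."""
    return "pos" if any(t in POSITIVE for t in tokens) else "neg"

def pos_attack(tokens, target_pos="ADJ"):
    """Swap words of the target part of speech for synonyms,
    stopping as soon as the predicted label flips."""
    original = classify(tokens)
    adv = list(tokens)
    for i, tok in enumerate(adv):
        if POS.get(tok) == target_pos and tok in SYNONYMS:
            adv[i] = SYNONYMS[tok]
            if classify(adv) != original:
                return adv  # successful adversarial example
    return None  # attack failed for this sentence

tokens = "the movie was great".split()
print(classify(tokens))    # pos
print(pos_attack(tokens))  # ['the', 'movie', 'was', 'grand']
```

Measuring how often attacks on each part of speech succeed (adjectives vs. nouns vs. verbs, etc.) is one way to probe which parts of speech a classifier depends on most, which is the kind of question the paper studies.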
Keywords
» Artificial intelligence » Classification » CNN » Machine learning » Semantics