Summary of The Impact of Quantization on the Robustness of Transformer-based Text Classifiers, by Seyed Parsa Neshaei et al.


The Impact of Quantization on the Robustness of Transformer-based Text Classifiers

by Seyed Parsa Neshaei, Yasaman Boreshban, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel

First submitted to arXiv on: 8 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores the impact of quantization on the robustness of Transformer-based NLP models against adversarial attacks. It applies quantization to BERT and DistilBERT models on text classification tasks using the SST-2, Emotion, and MR datasets, evaluating their performance under the TextFooler, PWWS, and PSO attacks. The results show that quantization improves the adversarial accuracy of the models by an average of 18.68%, outperforming adversarial training without adding computational overhead. This highlights the effectiveness of quantization in enhancing the robustness of NLP models.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how a technique called quantization can help make language models more robust against adversarial attacks, which are inputs deliberately tweaked to trick a model. The researchers tested this on two popular AI models, BERT and DistilBERT, using three different text datasets, to see whether quantized models handle such tricky inputs better. The results show that quantization made the models about 18% more accurate when facing these attacks, which is really useful for making AI systems more secure.
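To make the idea of quantization more concrete, here is a minimal sketch of symmetric 8-bit weight quantization, the general kind of technique the paper studies. The specific scheme shown (per-tensor symmetric int8) is an assumption for illustration only; the paper's exact quantization setup is not described in these summaries.

```python
# Illustrative sketch: symmetric per-tensor int8 quantization of model weights.
# This is an assumed, simplified scheme for illustration, not the paper's exact method.

def quantize_int8(weights):
    """Map float weights to integers in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0  # largest magnitude maps to 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.89]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight differs from the original by at most one
# quantization step (the scale), so most of the information survives
# while each value now fits in a single byte.
```

The rounding step discards fine-grained detail in the weights; one intuition for the robustness gain reported in the paper is that this coarser representation leaves less room for the tiny, carefully crafted perturbations that adversarial attacks rely on.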

Keywords

* Artificial intelligence  * BERT  * NLP  * Quantization  * Text classification  * Transformer