Summary of Are Compressed Language Models Less Subgroup Robust?, by Leonidas Gee et al.
Are Compressed Language Models Less Subgroup Robust? by Leonidas Gee, Andrea Zugarini, Novi Quadrianto. First submitted to…