Summary of Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses, by Xiaosen Zheng et al.
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses, by Xiaosen Zheng, Tianyu Pang,…
MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization, by Aozhong Zhang, Naigang Wang, Yanxia Deng, Xin…
Evaluating Mathematical Reasoning of Large Language Models: A Focus on Error Identification and Correction, by Xiaoyuan…
How Random is Random? Evaluating the Randomness and Humaness of LLMs’ Coin Flips, by Katherine Van…
Outliers and Calibration Sets have Diminishing Effect on Quantization of Modern LLMs, by Davide Paglieri, Saurabh…
Effective Interplay between Sparsity and Quantization: From Theory to Practice, by Simla Burcu Harma, Ayan Chakraborty,…
Improving Generalization and Convergence by Enhancing Implicit Regularization, by Mingze Wang, Jinbo Wang, Haotian He, Zilin…
The Point of View of a Sentiment: Towards Clinician Bias Detection in Psychiatric Notes, by Alissa…
SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths, by Kaixuan Huang, Xudong Guo, Mengdi Wang. First submitted…
MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series, by Ge Zhang, Scott Qu, Jiaheng…