Summary of Compressing Large Language Models Using Low Rank and Low Precision Decomposition, by Rajarshi Saha et al.
Compressing Large Language Models using Low Rank and Low Precision Decomposition, by Rajarshi Saha, Naomi Sagan,…
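The title names a concrete structural idea: approximate each weight matrix as a low-precision matrix plus a low-rank correction, W ≈ Q + LR, so most entries are stored in very few bits while the low-rank term recovers the dominant structure lost to quantization. Below is a minimal NumPy sketch of that generic decomposition; the alternating loop, uniform quantizer, bit width, and rank are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def uniform_quantize(x, bits=2):
    # Round x onto a uniform grid of 2**bits levels spanning its range.
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((x - lo) / scale) * scale + lo

def low_rank_low_precision(W, bits=2, rank=32, iters=5):
    # Approximate W as Q + L @ R: Q is low precision, L @ R is rank-`rank`.
    # Alternate between quantizing what the low-rank part misses and
    # refitting the low-rank part (via truncated SVD) on what Q misses.
    L = np.zeros((W.shape[0], rank))
    R = np.zeros((rank, W.shape[1]))
    for _ in range(iters):
        Q = uniform_quantize(W - L @ R, bits=bits)
        U, S, Vt = np.linalg.svd(W - Q, full_matrices=False)
        L, R = U[:, :rank] * S[:rank], Vt[:rank, :]
    return Q, L, R

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))  # stand-in for an LLM weight matrix
Q, L, R = low_rank_low_precision(W)
err = np.linalg.norm(W - (Q + L @ R)) / np.linalg.norm(W)
print(f"relative Frobenius error: {err:.3f}")
```

Storing Q at 2 bits per entry plus two thin rank-32 factors is far cheaper than 16-bit weights, which is the compression trade-off the decomposition is built around.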