Summary of MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series, by Ge Zhang et al.
MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series, by Ge Zhang, Scott Qu, Jiaheng…
Compressing Large Language Models using Low Rank and Low Precision Decomposition, by Rajarshi Saha, Naomi Sagan,…
Language Generation with Strictly Proper Scoring Rules, by Chenze Shao, Fandong Meng, Yijin Liu, Jie Zhou. First…
Understanding Intrinsic Socioeconomic Biases in Large Language Models, by Mina Arzaghi, Florian Carichon, Golnoosh Farnadi. First submitted…
On the Origin of Llamas: Model Tree Heritage Recovery, by Eliahu Horwitz, Asaf Shul, Yedid Hoshen. First…
2BP: 2-Stage Backpropagation, by Christopher Rae, Joseph K. L. Lee, James Richings. First submitted to arxiv on:…
Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment, by Keming Lu, Bowen Yu,…
On Fairness of Low-Rank Adaptation of Large Models, by Zhoujie Ding, Ken Ziyu Liu, Pura Peetathawatchai,…
CLAQ: Pushing the Limits of Low-Bit Post-Training Quantization for LLMs, by Haoyu Wang, Bei Liu, Hang…
On the Algorithmic Bias of Aligning Large Language Models with RLHF: Preference Collapse and Matching…