Summary of SKIM: Any-bit Quantization Pushing the Limits of Post-Training Quantization, by Runsheng Bai et al.
SKIM: Any-bit Quantization Pushing the Limits of Post-Training Quantization, by Runsheng Bai, Bo Liu, Qiang Liu. First…