Summary of Efficient Continual Pre-training of LLMs for Low-resource Languages, by Arijit Nag et al.
Efficient Continual Pre-training of LLMs for Low-resource Languages, by Arijit Nag, Soumen Chakrabarti, Animesh Mukherjee, Niloy…