Summary of DENIAHL: In-Context Features Influence LLM Needle-In-A-Haystack Abilities, by Hui Dai et al.
DENIAHL: In-Context Features Influence LLM Needle-In-A-Haystack Abilities by Hui Dai, Dan Pechi, Xinyi Yang, Garvit Banga,…
Training and Evaluating Language Models with Template-based Data Generation by Yifan Zhang. First submitted to arXiv on:…
Pushing the Limits of Large Language Model Quantization via the Linearity Theorem by Vladimir Malinovskii, Andrei…
On Limitations of LLM as Annotator for Low Resource Languages by Suramya Jadhav, Abhay Shanbhag, Amogh…
CLOVER: Cross-Layer Orthogonal Vectors Pruning and Fine-Tuning by Fanxu Meng, Pingzhi Tang, Fan Jiang, Muhan Zhang. First…
Cautious Optimizers: Improving Training with One Line of Code by Kaizhao Liang, Lizhang Chen, Bo Liu,…
Hymba: A Hybrid-head Architecture for Small Language Models by Xin Dong, Yonggan Fu, Shizhe Diao, Wonmin…
Evaluating LLMs Capabilities Towards Understanding Social Dynamics by Anique Tahir, Lu Cheng, Manuel Sandoval, Yasin N.…
Deriving Activation Functions Using Integration by Allen Hao Huang, Imanol Schlag. First submitted to arXiv on: 20…
CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization by Nay Myat Min, Long…