Summary of NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models, by Yibo Zhong et al.
NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models by Yibo Zhong, Haoxiang Jiang, Lincan Li, Ryumei Nakada,…
Discrete Copula Diffusion by Anji Liu, Oliver Broadrick, Mathias Niepert, Guy Van den Broeck. First submitted to…
House of Cards: Massive Weights in LLMs by Jaehoon Oh, Seungjun Shin, Dokwan Oh. First submitted to…
Not All LLM Reasoners Are Created Equal by Arian Hosseini, Alessandro Sordoni, Daniel Toyama, Aaron Courville,…
DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models by Yuxuan…
Speculative Coreset Selection for Task-Specific Fine-tuning by Xiaoyu Zhang, Juan Zhai, Shiqing Ma, Chao Shen, Tianlin…
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models by Lucas Bandarkar, Benjamin Muller, Pritish…
FlashMask: Efficient and Rich Mask Extension of FlashAttention by Guoxia Wang, Jinle Zeng, Xiyuan Xiao, Siming…
A Knowledge-Informed Large Language Model Framework for U.S. Nuclear Power Plant Shutdown Initiating Event Classification…
MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards by Sheng Wang, Liheng Chen,…