Summary of LoRA Training in the NTK Regime Has No Spurious Local Minima, by Uijeong Jang et al.
LoRA Training in the NTK Regime has No Spurious Local Minima, by Uijeong Jang, Jason D.…
QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning, by Hossein Rajabzadeh, Mojtaba Valipour,…
LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation, by Hongyun Zhou, Xiangyu Lu, Wang Xu,…
Flora: Low-Rank Adapters Are Secretly Gradient Compressors, by Yongchang Hao, Yanshuai Cao, Lili Mou. First submitted to…
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models, by Fangzhao Zhang, Mert Pilanci. First submitted to arxiv on:…
A Framework to Implement 1+N Multi-task Fine-tuning Pattern in LLMs Using the CGC-LORA Algorithm, by Chao…
LoTR: Low Tensor Rank Weight Adaptation, by Daniel Bershatsky, Daria Cherniuk, Talgat Daulbaev, Aleksandr Mikhalev, Ivan…
Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model, by Zihan Zhong, Zhiqiang Tang, Tong…
Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition, by…
Solving Continual Offline Reinforcement Learning with Decision Transformer, by Kaixin Huang, Li Shen, Chen Zhao, Chun…