Summary of LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning, by Rui Pan et al.
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning, by Rui Pan, Xiang Liu, Shizhe… (a minimal code sketch of the method follows the paper list below)
A Single Linear Layer Yields Task-Adapted Low-Rank Matrices, by Hwichan Kim, Shota Sasaki, Sho Hoshino, Ukyo…
Improving LoRA in Privacy-preserving Federated Learning, by Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding. First submitted…
AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning, by Ruiyi Zhang, Rushi…
Asymmetry in Low-Rank Adapters of Foundation Models, by Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez…
PRoLoRA: Partial Rotation Empowers More Parameter-Efficient LoRA, by Sheng Wang, Boyang Xue, Jiacheng Ye, Jiyue Jiang,…
CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations, by Jules…
LoRA+: Efficient Low Rank Adaptation of Large Models, by Soufiane Hayou, Nikhil Ghosh, Bin Yu. First submitted…
Uncertainty quantification in fine-tuned LLMs using LoRA ensembles, by Oleksandr Balabanov, Hampus Linander. First submitted to arxiv…
Privacy-Preserving Low-Rank Adaptation against Membership Inference Attacks for Latent Diffusion Models, by Zihao Luo, Xilie Xu,…
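
For the LISA entry at the top of this list: the paper's core idea is to keep most transformer layers frozen and, every few optimizer steps, unfreeze a small randomly sampled subset of layers, while embeddings and the language-model head stay trainable throughout. The Python sketch below is a hypothetical illustration of that sampling loop, not the authors' reference implementation; the model choice ("gpt2"), gamma, and interval values are illustrative assumptions.

import random
import torch
from transformers import AutoModelForCausalLM

# Illustrative assumptions: GPT-2 as the base model, 2 active layers,
# resampling every 20 steps. LISA's actual hyperparameters differ by model.
model = AutoModelForCausalLM.from_pretrained("gpt2")
layers = model.transformer.h          # stack of transformer blocks
gamma, interval = 2, 20               # layers to activate, steps between resamples

def resample_active_layers():
    """Freeze every transformer block, then unfreeze `gamma` randomly
    chosen ones. Embeddings and the LM head are never frozen here, so
    they remain trainable at every step, as described in the paper."""
    for block in layers:
        for p in block.parameters():
            p.requires_grad = False
    for idx in random.sample(range(len(layers)), gamma):
        for p in layers[idx].parameters():
            p.requires_grad = True

# Training-loop sketch (dataloader and optimizer assumed to exist):
# for step, batch in enumerate(dataloader):
#     if step % interval == 0:
#         resample_active_layers()   # gradients flow only through active layers
#     loss = model(**batch).loss
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()

Because only a few blocks require gradients at any time, optimizer state and activation-gradient memory stay close to those of parameter-efficient methods like LoRA, which is the memory saving the paper targets.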