Summary of LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning, by Rui Pan et al.
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning, by Rui Pan, Xiang Liu, Shizhe…
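The title names LISA's core trick: rather than attaching low-rank adapters everywhere, it unfreezes only a small, periodically resampled subset of transformer layers while keeping the embeddings and output head trainable throughout, which cuts optimizer memory. Below is a minimal PyTorch sketch of that layer-activation step, assuming a LLaMA-style Hugging Face checkpoint; `lisa_sample_layers`, `n_active_layers`, and the resampling interval `K` are illustrative names, not the paper's API, and the uniform sampling here is a simplification of the paper's importance-weighted scheme.

```python
import random
from transformers import AutoModelForCausalLM

def lisa_sample_layers(model, n_active_layers=2):
    """Freeze every decoder block except a random subset, keeping the
    token embeddings and LM head trainable (LISA-style activation).

    Assumes a LLaMA-style model where decoder blocks live at
    model.model.layers; adjust the attribute paths for other models.
    """
    layers = model.model.layers
    active = set(random.sample(range(len(layers)), n_active_layers))
    for i, block in enumerate(layers):
        for p in block.parameters():
            p.requires_grad = i in active
    # Embeddings and the output head stay trainable at every step.
    for p in model.model.embed_tokens.parameters():
        p.requires_grad = True
    for p in model.lm_head.parameters():
        p.requires_grad = True

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# Resample which blocks are trainable every K optimizer steps,
# e.g. inside the training loop:
#     if step % K == 0:
#         lisa_sample_layers(model, n_active_layers=2)
```

Because only a few blocks hold gradients and optimizer state at any time, peak memory scales with `n_active_layers` rather than the full model depth.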
A Unified Module for Accelerating STABLE-DIFFUSION: LCM-LORA, by Ayush Thakur, Rashmi Vashisth
A Three-Phases SFT Hybrid Model Integrated Strong Prior Module and Data Overlap Estimation in the…
A Single Linear Layer Yields Task-Adapted Low-Rank Matrices, by Hwichan Kim, Shota Sasaki, Sho Hoshino, Ukyo…
KnowLA: Enhancing Parameter-efficient Finetuning with Knowledgeable Adaptation, by Xindi Luo, Zequn Sun, Jing Zhao, Zhe Zhao,…
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models, by Zeyu…
BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models, by Rushi Qiang,…
FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications, by Thanos Konstantinidis, Giorgos Iacovides, Mingxue Xu, Tony…
Improving LoRA in Privacy-preserving Federated Learning, by Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding
SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules, by Xiangyu Chen, Jing Liu, Ye Wang, Pu…