Summary of LoRA-Pro: Are Low-Rank Adapters Properly Optimized?, by Zhengbo Wang et al.
LoRA-Pro: Are Low-Rank Adapters Properly Optimized? by Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu…
Accurate and Efficient Fine-Tuning of Quantized Large Language Models Through Optimal Balance, by Ao Shen, Qiang…
Rapid Switching and Multi-Adapter Fusion via Sparse High Rank Adapters, by Kartikeya Bhardwaj, Nilesh Prasad Pandey,…
Enhancing Parameter Efficiency and Generalization in Large-Scale Models: A Regularized and Masked Low-Rank Adaptation Approach, by…
A Survey on LoRA of Large Language Models, by Yuren Mao, Yuhang Ge, Yijiang Fan, Wenyi…
DataDream: Few-shot Guided Dataset Generation, by Jae Myung Kim, Jessica Bader, Stephan Alaniz, Cordelia Schmid, Zeynep…
On Large Language Model Continual Unlearning, by Chongyang Gao, Lixu Wang, Kaize Ding, Chenkai Weng, Xiao…
RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization, by Xijie Huang, Zechun Liu, Shih-Yang Liu,…
ROSA: Random Subspace Adaptation for Efficient Fine-Tuning, by Marawan Gamal Abdel Hameed, Aristides Milios, Siva Reddy,…
If You Don’t Understand It, Don’t Use It: Eliminating Trojans with Filters Between Layers, by Adriano…