Summary of MoRe Fine-Tuning with 10x Fewer Parameters, by Wenxuan Tan et al.
MoRe Fine-Tuning with 10x Fewer Parameters, by Wenxuan Tan, Nicholas Roberts, Tzu-Heng Huang, Jitian Zhao, John…
SORSA: Singular Values and Orthonormal Regularized Singular Vectors Adaptation of Large Language Models, by Yang Cao. First…
Instant Adversarial Purification with Adversarial Consistency Distillation, by Chun Tong Lei, Hon Ming Yam, Zhongliang Guo,…
CURLoRA: Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation, by Muhammad Fawi. First submitted to arXiv on:…
Reprogramming Foundational Large Language Models (LLMs) for Enterprise Adoption for Spatio-Temporal Forecasting Applications: Unveiling a New…
The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies,…
NoRA: Nested Low-Rank Adaptation for Efficient Fine-Tuning of Large Models, by Cheng Lin, Lujun Li, Dezhi Li,…
SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models, by Anke Tang, Li…
RBLA: Rank-Based-LoRA-Aggregation for Fine-Tuning Heterogeneous Models in FLaaS, by Shuaijun Chen, Omid Tavallaie, Niousha Nazemi, Albert…
Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs, by Sungmin Cha, Sungjun Cho, Dasol Hwang, Moontae…