Summary of LoLCATs: On Low-Rank Linearizing of Large Language Models, by Michael Zhang et al.
LoLCATs: On Low-Rank Linearizing of Large Language Models, by Michael Zhang, Simran Arora, Rahul Chalamala, Alan…
Retrieval Instead of Fine-tuning: A Retrieval-based Parameter Ensemble for Zero-shot Learning, by Pengfei Jin, Peng Shu,…
AM-SAM: Automated Prompting and Mask Calibration for Segment Anything Model, by Yuchen Li, Li Zhang, Youwei…
BiDoRA: Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation, by Peijia Qin, Ruiyi Zhang, Pengtao Xie. First submitted to arXiv…
ALLoRA: Adaptive Learning Rate Mitigates LoRA Fatal Flaws, by Hai Huang, Randall Balestriero. First submitted to arXiv…
Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation, by Grigory Malinovsky,…
One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation, by Fabian Paischer, Lukas Hauzenberger,…
Neutral residues: revisiting adapters for model extension, by Franck Signe Talla, Herve Jegou, Edouard Grave. First submitted…
Selective Aggregation for Low-Rank Adaptation in Federated Learning, by Pengxin Guo, Shuang Zeng, Yanran Wang, Huijie…
DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models, by Yuxuan…