Summary of LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement, by Jieming Bian et al.
LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement by Jieming Bian, Lei Wang, Letian Zhang,…
AutoMixQ: Self-Adjusting Quantization for High Performance Memory-Efficient Fine-Tuning by Changhai Zhou, Shiyang Zhang, Yuhua Zhou, Zekai…
On the Way to LLM Personalization: Learning to Remember User Conversations by Lucie Charlotte Magister, Katherine…
Federated Low-Rank Adaptation with Differential Privacy over Wireless Networks by Tianqu Kang, Zixin Wang, Hengtao He,…
LLM-NEO: Parameter Efficient Knowledge Distillation for Large Language Models by Runming Yang, Taiqiang Wu, Jiahao Wang,…
Federated LLMs Fine-tuned with Adaptive Importance-Aware LoRA by Yang Su, Na Yan, Yansha Deng. First submitted to…
LLM-R: A Framework for Domain-Adaptive Maintenance Scheme Generation Combining Hierarchical Agents and RAG by Laifa Tao,…
Variational Low-Rank Adaptation Using IVON by Bai Cong, Nico Daheim, Yuesong Shen, Daniel Cremers, Rio Yokota,…
Dual Low-Rank Adaptation for Continual Learning with Pre-Trained Models by Huancheng Chen, Jingtao Li, Nidham Gazagnadou,…
Exploring Gradient Subspaces: Addressing and Overcoming LoRA’s Limitations in Federated Fine-Tuning of Large Language Models by…