Summary of Low-Rank Quantization-Aware Training for LLMs, by Yelysei Bondarenko et al.
Low-Rank Quantization-Aware Training for LLMs, by Yelysei Bondarenko, Riccardo Del Chiaro, Markus Nagel. First submitted to arXiv…
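The title pairs two standard ingredients: quantization-aware training (QAT), which trains through a fake-quantizer using a straight-through estimator, and a low-rank, LoRA-style parameterization that keeps the pretrained weight frozen. The page does not reproduce the paper's exact formulation, so the PyTorch sketch below only illustrates the general recipe, assuming the low-rank update AB is added to the frozen weight before fake quantization; the class and parameter names (LowRankQATLinear, rank, n_bits) are hypothetical, not the authors' API.

```python
import torch
import torch.nn as nn

class LowRankQATLinear(nn.Module):
    """Frozen base weight + trainable low-rank update, passed through
    symmetric fake quantization with a straight-through estimator (STE).
    Illustrative sketch only; not necessarily the paper's exact method."""

    def __init__(self, in_features, out_features, rank=8, n_bits=4):
        super().__init__()
        # The pretrained weight would be loaded here; it stays frozen during QAT.
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # LoRA-style factors: A starts at zero so training begins from the base model.
        self.A = nn.Parameter(torch.zeros(out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.q_max = 2 ** (n_bits - 1) - 1
        # Fixed per-tensor scale; learnable scales (e.g., LSQ) are a common extension.
        self.register_buffer("scale", self.weight.abs().max() / self.q_max)

    def fake_quant(self, w):
        # Round-to-nearest on the integer grid; the STE trick below makes
        # rounding act as the identity in the backward pass.
        q = torch.clamp(torch.round(w / self.scale), -self.q_max - 1, self.q_max)
        return w + (q * self.scale - w).detach()

    def forward(self, x):
        w_eff = self.weight + self.A @ self.B  # low-rank corrected weight
        return nn.functional.linear(x, self.fake_quant(w_eff))

layer = LowRankQATLinear(64, 64, rank=4, n_bits=4)
out = layer(torch.randn(2, 64))
out.sum().backward()
print(layer.A.grad.shape, layer.weight.grad)  # torch.Size([64, 4]) None
```

Because only the rank-r factors receive gradients while the base weight stays frozen, optimizer state scales with the rank rather than with the full weight matrix, which is the usual memory argument for combining low-rank adaptation with QAT.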
Related paper summaries:
Online DPO: Online Direct Preference Optimization with Fast-Slow Chasing, by Biqing Qi, Pengfei Li, Fangyuan Li, …
Federated LoRA with Sparse Communication, by Kevin Kuo, Arian Raje, Kousik Rajesh, Virginia Smith. First submitted to …
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation, by Can Yaras, Peng Wang, Laura Balzano, …
Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models, by Jerry Yao-Chieh Hu, Maojiang Su, En-Jui …
Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All You Need, by Martin …
QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation, by Zhuo Chen, Rumen Dangovski, Charlotte …
ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections, by Massimo Bini, Karsten Roth, Zeynep Akata, …
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors, by Vijay Lingam, Atula Tejaswi, Aditya Vavre, Aneesh Shetty, Gautham …
OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning, by Pengxiang Li, Lu Yin, Xiaowei …