Summary of Mamba State-Space Models Are Lyapunov-Stable Learners, by John T. Halloran et al.
Mamba State-Space Models Are Lyapunov-Stable Learners by John T. Halloran, Manbir Gulati, Paul F. Roysdon. First submitted…
Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models by Xinxi Zhang, Song Wen, Ligong Han, Felix Juefei-Xu,…
ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections by Massimo Bini, Karsten Roth, Zeynep Akata,…
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors by Vijay Lingam, Atula Tejaswi, Aditya Vavre, Aneesh Shetty, Gautham…
Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts by Ruipeng Zhang, Ziqing Fan, Jiangchao Yao, Ya Zhang, Yanfeng…
Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models by Chia-Yi…
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models by Xudong Lu, Aojun Zhou, Yuhui Xu, Renrui…
Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation by Abhinav Jain, Swarat Chaudhuri,…
In-context Time Series Predictor by Jiecheng Lu, Yan Sun, Shihao Yang. First submitted to arxiv on: 23…
LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks by Michelle Halbheer, Dominik J. Mühlematter, Alexander Becker, Dominik…