Summary of Towards Infinite-Long Prefix in Transformer, by Yingyu Liang et al.
Towards Infinite-Long Prefix in Transformer, by Yingyu Liang, Zhenmei Shi, Zhao Song, Chiwun Yang. First submitted to…
Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning?, by Nirjhor Rouf, Fin Amin, …
Sparse High Rank Adapters, by Kartikeya Bhardwaj, Nilesh Prasad Pandey, Sweta Priyadarshi, Viswanath Ganapathy, Shreya Kadambi, …
LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation, by Seyedarmin Azizi, Souvik Kundu, Massoud Pedram. First submitted to…
VIRL: Volume-Informed Representation Learning towards Few-shot Manufacturability Estimation, by Yu-hsuan Chen, Jonathan Cagan, Levent Burak Kara. First submitted to…
Mixture-of-Subspaces in Low-Rank Adaptation, by Taiqiang Wu, Jiahao Wang, Zhe Zhao, Ngai Wong. First submitted to arXiv…
Promoting Data and Model Privacy in Federated Learning through Quantized LoRA, by JianHao Zhu, Changze Lv, …
Vertical LoRA: Dense Expectation-Maximization Interpretation of Transformers, by Zhuolin Fu. First submitted to arXiv on: 13 Jun…
The Impact of Initialization on LoRA Finetuning Dynamics, by Soufiane Hayou, Nikhil Ghosh, Bin Yu. First submitted…
SwitchLoRA: Switched Low-Rank Adaptation Can Learn Full-Rank Information, by Kaiye Zhou, Shucheng Wang, Jun Xu. First submitted…