Summary of "Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation", by Wei Dong et al.
Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation, by Wei Dong, Yuan Sun, Yiting Yang,…
Tailored-LLaMA: Optimizing Few-Shot Learning in Pruned LLaMA Models with Task-Specific Prompts, by Danyal Aftab, Steven Davy. First…
Ali-AUG: Innovative Approaches to Labeled Data Augmentation using One-Step Diffusion Model, by Ali Hamza, Aizea Lojo,…
Understanding Layer Significance in LLM Alignment, by Guangyuan Shi, Zexin Lu, Xiaoyu Dong, Wenlong Zhang, Xuanyu…
FairLoRA: Unpacking Bias Mitigation in Vision Models with Fairness-Driven Low-Rank Adaptation, by Rohan Sukumaran, Aarash Feizi,…
Can Large Language Models Act as Ensembler for Multi-GNNs?, by Hanqi Duan, Yao Cheng, Jianxiang Yu,…
Controlled Low-Rank Adaptation with Subspace Regularization for Continued Training on Large Language Models, by Yuheng Lu,…
How to Build a Pre-trained Multimodal Model for Simultaneously Chatting and Decision-making?, by Zuojin Tang, Bin…
Habaek: High-Performance Water Segmentation through Dataset Expansion and Inductive Bias Optimization, by Hanseon Joo, Eunji Lee,…
Improving the Language Understanding Capabilities of Large Language Models Using Reinforcement Learning, by Bokai Hu, Sai…