Summary of Training-Free Bayesianization for Low-Rank Adapters of Large Language Models, by Haizhou Shi et al.
Training-Free Bayesianization for Low-Rank Adapters of Large Language Models, by Haizhou Shi, Yibin Wang, Ligong Han,…
Closed-Loop Supervised Fine-Tuning of Tokenized Traffic Models, by Zhejun Zhang, Peter Karkus, Maximilian Igl, Wenhao Ding,…
CALICO: Conversational Agent Localization via Synthetic Data Generation, by Andy Rosenbaum, Pegah Kharazmi, Ershad Banijamali, Lu…
CompCap: Improving Multimodal Large Language Models with Composite Captions, by Xiaohui Chen, Satya Narayan Shukla, Mahmoud…
EACO: Enhancing Alignment in Multimodal LLMs via Critical Observation, by Yongxin Wang, Meng Cao, Haokun Lin,…
Gla-AI4BioMed at RRG24: Visual Instruction-tuned Adaptation for Radiology Report Generation, by Xi Zhang, Zaiqiao Meng, Jake…
One Communication Round is All It Needs for Federated Fine-Tuning Foundation Models, by Ziyao Wang, Bowei…
Prompting Large Language Models for Clinical Temporal Relation Extraction, by Jianping He, Laila Rasmy, Haifang Li,…
Quantifying the Limits of Segmentation Foundation Models: Modeling Challenges in Segmenting Tree-Like and Low-Contrast Objects, by…
Transferring self-supervised pre-trained models for SHM data anomaly detection with scarce labeled data, by Mingyuan Zhou,…