Summary of Endowing Pre-trained Graph Models with Provable Fairness, by Zhongjian Zhang et al.
Endowing Pre-trained Graph Models with Provable Fairness, by Zhongjian Zhang, Mengmei Zhang, Yue Yu, Cheng Yang,…
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models, by Yifan Yang, Jiajun…
Speculative Streaming: Fast LLM Inference without Auxiliary Models, by Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry…
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks, by Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova,…
Empowering Federated Learning for Massive Models with NVIDIA FLARE, by Holger R. Roth, Ziyue Xu, Yuan-Ting…
LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation, by Hongyun Zhou, Xiangyu Lu, Wang Xu,…
Learning to Route Among Specialized Experts for Zero-Shot Generalization, by Mohammed Muqeeth, Haokun Liu, Yufan Liu,…
A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization…
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models, by Hyesung Jeon, Yulhwa Kim, Jae-joon Kim. First…
Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey, by Yi Xin, Siqi Luo, Haodi Zhou, Junlong…