Summary of Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models, by Shuoyuan Wang et al.
Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models, by Shuoyuan Wang, Yixuan Li, Hongxin…