Summary of Mind the Gap Between Prototypes and Images in Cross-domain Finetuning, by Hongduan Tian et al.
Mind the Gap Between Prototypes and Images in Cross-domain Finetuning, by Hongduan Tian, Feng Liu, Zhanke…
Retrieval-Reasoning Large Language Model-based Synthetic Clinical Trial Generation, by Zerui Xu, Fang Wu, Yuanyuan Zhang, Yue…
SAC-GLAM: Improving Online RL for LLM agents with Soft Actor-Critic and Hindsight Relabeling, by Loris Gaven,…
Model Balancing Helps Low-data Training and Fine-tuning, by Zihang Liu, Yuanzhe Hu, Tianyu Pang, Yefan Zhou,…
ExoTST: Exogenous-Aware Temporal Sequence Transformer for Time Series Prediction, by Kshitij Tayal, Arvind Renganathan, Xiaowei Jia,…
DAQ: Density-Aware Post-Training Weight-Only Quantization For LLMs, by Yingsong Luo, Ling Chen. First submitted to arXiv on:…
Potential-Based Intrinsic Motivation: Preserving Optimality With Complex, Non-Markovian Shaping Rewards, by Grant C. Forbes, Leonardo Villalobos-Arias,…
Abnormality Forecasting: Time Series Anomaly Prediction via Future Context Modeling, by Sinong Zhao, Wenrui Wang, Hongzuo…
Divide-Verify-Refine: Can LLMs Self-Align with Complex Instructions? by Xianren Zhang, Xianfeng Tang, Hui Liu, Zongyu Wu,…
Global Censored Quantile Random Forest, by Siyu Zhou, Limin Peng. First submitted to arXiv on: 16 Oct…