Summary of "Transformers Handle Endogeneity in In-Context Linear Regression" by Haodong Liang et al.
Transformers Handle Endogeneity in In-Context Linear Regression, by Haodong Liang, Krishnakumar Balasubramanian, Lifeng Lai. First submitted to…