Summary of Sequence Length Scaling in Vision Transformers for Scientific Images on Frontier, by Aristeidis Tsaris et al.
Sequence Length Scaling in Vision Transformers for Scientific Images on Frontier, by Aristeidis Tsaris, Chengming Zhang,…
MLPs Learn In-Context on Regression and Classification Tasks, by William L. Tong, Cengiz Pehlevan. First submitted to…
Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making, by Drago Plecko,…
Spectraformer: A Unified Random Feature Framework for Transformer, by Duke Nguyen, Aditya Joshi, Flora Salim. First submitted…
Linking In-context Learning in Transformers to Human Episodic Memory, by Li Ji-An, Corey Y. Zhou, Marcus…
LARS-VSA: A Vector Symbolic Architecture for Learning with Abstract Rules, by Mohamed Mejri, Chandramouli Amarnath, Abhijit…
LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks, by Michelle Halbheer, Dominik J. Mühlematter, Alexander Becker, Dominik…
ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification, by Yefei He, Luoming Zhang,…
Deep Learning Methods for Adjusting Global MFD Speed Estimations to Local Link Configurations, by Zhixiong Jin,…
Enhancing Image Layout Control with Loss-Guided Diffusion Models, by Zakaria Patel, Kirill Serkh. First submitted to arXiv…