Summary of SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration, by Jintao Zhang et al.
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration by Jintao Zhang, Jia Wei, Haofeng Huang, Pengle…