Summary of On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery, by Renpu Liu et al.
On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery, by Renpu Liu, Ruida Zhou, Cong…
Personalized Adaptation via In-Context Preference Learning, by Allison Lau, Younwoo Choi, Vahid Balazadeh, Keertana Chidambaram, Vasilis…
Analyzing Deep Transformer Models for Time Series Forecasting via Manifold Learning, by Ilya Kaufman, Omri Azencot. First…
Adversarial Testing as a Tool for Interpretability: Length-based Overfitting of Elementary Functions in Transformers, by Patrik…
LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation, by Xuan Zhang, Fengzhuo…
Movie Gen: A Cast of Media Foundation Models, by Adam Polyak, Amit Zohar, Andrew Brown, Andros…
Reducing the Transformer Architecture to a Minimum, by Bernhard Bermeitinger, Tomas Hrycej, Massimo Pavone, Julianus Kath,…
Enhancing Text Generation in Joint NLG/NLU Learning Through Curriculum Learning, Semi-Supervised Training, and Advanced Optimization…
Text-Guided Multi-Property Molecular Optimization with a Diffusion Language Model, by Yida Xiong, Kun Li, Weiwei Liu,…
Transformer-Based Approaches for Sensor-Based Human Activity Recognition: Opportunities and Challenges, by Clayton Souza Leite, Henry Mauranen,…