Summary of Flextron: Many-in-One Flexible Large Language Model, by Ruisi Cai et al.
Flextron: Many-in-One Flexible Large Language Model, by Ruisi Cai, Saurav Muralidharan, Greg Heinrich, Hongxu Yin, Zhangyang…