Summary of Grounding Large Language Models in Embodied Environment with Imperfect World Models, by Haolan Liu et al.
Grounding Large Language Models In Embodied Environment With Imperfect World Models
by Haolan Liu, Jishen Zhao