Summary of Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts, by Junmo Kang et al.
Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts, by Junmo Kang, Leonid Karlinsky, Hongyin Luo, …