Summary of Sloth: Scaling Laws for LLM Skills to Predict Multi-benchmark Performance Across Families, by Felipe Maia Polo et al.
Sloth: scaling laws for LLM skills to predict multi-benchmark performance across families, by Felipe Maia Polo, …
MoSH: Modeling Multi-Objective Tradeoffs with Soft and Hard Bounds, by Edward Chen, Natalie Dullerud, Thomas Niedermayr, …
AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement, by Pranjal Aggarwal, Bryan Parno, …
Exploring Multi-Grained Concept Annotations for Multimodal Large Language Models, by Xiao Xu, Tianhao Niu, Yuxi Xie, …
Text-to-3D Gaussian Splatting with Physics-Grounded Motion Generation, by Wenqing Wang, Yun Fu. First submitted to arXiv on: …
Multi-Armed Bandit Approach for Optimizing Training on Synthetic Data, by Abdulrahman Kerim, Leandro Soriano Marcolino, Erickson …
Multi-Objective Alignment of Large Language Models Through Hypervolume Maximization, by Subhojyoti Mukherjee, Anusha Lalitha, Sailik Sengupta, …
Gla-AI4BioMed at RRG24: Visual Instruction-tuned Adaptation for Radiology Report Generation, by Xi Zhang, Zaiqiao Meng, Jake …
ALMA: Alignment with Minimal Annotation, by Michihiro Yasunaga, Leonid Shamis, Chunting Zhou, Andrew Cohen, Jason Weston, …
Multi-Bin Batching for Increasing LLM Inference Throughput, by Ozgur Guldogan, Jackson Kunde, Kangwook Lee, Ramtin Pedarsani. First …