Summary of Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts, by Junmo Kang et al.
Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts by Junmo Kang, Leonid Karlinsky, Hongyin Luo, …
Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models by Alireza Ganjdanesh, Reza …
Soft Prompting for Unlearning in Large Language Models by Karuna Bhaila, Minh-Hao Van, Xintao Wu. First submitted …
Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4…
Spectral Introspection Identifies Group Training Dynamics in Deep Neural Networks for Neuroimaging by Bradley T. Baker, …
Stochastic Neural Network Symmetrisation in Markov Categories by Rob Cornish. First submitted to arXiv on: 17 Jun …
WPO: Enhancing RLHF with Weighted Preference Optimization by Wenxuan Zhou, Ravi Agrawal, Shujian Zhang, Sathish Reddy …
Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations by Kazusato …
MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs by Ziyu Liu, Tao …
mDPO: Conditional Preference Optimization for Multimodal Large Language Models by Fei Wang, Wenxuan Zhou, James Y. …