Summary of Mixture of insighTful Experts (MoTE): The Synergy of Thought Chains and Expert Mixtures in Self-Alignment, by Zhili Liu et al.
Mixture of insighTful Experts (MoTE): The Synergy of Thought Chains and Expert Mixtures in Self-Alignment by Zhili Liu et al.
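The entry above carries only the paper's title, but the pattern that title names, an expert mixture layered over a chain-of-thought-style alignment pipeline, can be sketched generically. Below is a minimal PyTorch illustration of a mixture of LoRA-style low-rank experts combined by a learned router; the class name, shapes, rank, and routing scheme are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MixtureOfLoRAExperts(nn.Module):
    """Illustrative sketch (assumed, not from the paper): a frozen base
    linear layer plus several low-rank (LoRA-style) experts whose
    outputs are mixed by a token-wise learned router."""

    def __init__(self, d_in, d_out, num_experts=4, rank=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        # One low-rank (A, B) factor pair per expert.
        self.A = nn.Parameter(torch.randn(num_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, d_out, rank))
        self.router = nn.Linear(d_in, num_experts)  # gating network

    def forward(self, x):  # x: (batch, d_in)
        gates = torch.softmax(self.router(x), dim=-1)    # (batch, E)
        # Per-expert low-rank update: B_e @ (A_e @ x)
        low = torch.einsum("erd,bd->ber", self.A, x)     # (batch, E, r)
        upd = torch.einsum("eor,ber->beo", self.B, low)  # (batch, E, d_out)
        delta = (gates.unsqueeze(-1) * upd).sum(dim=1)   # (batch, d_out)
        return self.base(x) + delta
```

In this sketch only the router and the low-rank factors receive gradients; a MoTE-style setup would additionally tie individual experts to distinct reasoning steps of the alignment thought chain, which the title suggests but this generic sketch does not attempt.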
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning by Chunlin Tian, Zhan Shi, Zhijiang Guo, Li… (a minimal sketch of the LoRA update these variants share follows this list)
FashionSD-X: Multimodal Fashion Garment Synthesis using Latent Diffusion by Abhishek Kumar Singh, Ioannis Patras. First submitted to…
MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts by Dengchun Li, Yingzi Ma, …
ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning by Weifeng Chen, Jiacheng Zhang, Jie Wu, …
WavLLM: Towards Robust and Adaptive Speech Large Language Model by Shujie Hu, Long Zhou, Shujie Liu, …
Hyacinth6B: A large language model for Traditional Chinese by Chih-Wei Song, Yin-Te Tsai. First submitted to arXiv…
Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning by Yao Liang, Yuwei Wang, …
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation by Likun Li, …
Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models by Wenfeng Feng, Chuzhan Hao, Yuewei Zhang, …
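Most of the entries above (HydraLoRA, MixLoRA, MTLoRA, Block-wise LoRA, Mixture-of-LoRAs) are variants of LoRA, so a minimal sketch of the shared building block they modify may help. LoRA freezes the pretrained weight W and learns a low-rank update, giving y = Wx + (α/r)·BAx with rank-r factors A and B. The hyperparameters below are common defaults, not values taken from any of these papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA: y = W x + (alpha / r) * B(A(x)), with W frozen.
    Only the rank-r factors A and B are trained."""

    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)      # pretrained weight stays fixed
        self.A = nn.Linear(d_in, r, bias=False)
        self.B = nn.Linear(r, d_out, bias=False)
        nn.init.zeros_(self.B.weight)    # update starts at exactly zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))
```

Because B is zero-initialized, the adapted layer starts out identical to the frozen base model, and each listed variant then departs from this point: HydraLoRA makes the A/B structure asymmetric, MixLoRA and Mixture-of-LoRAs route among several such adapters, and Block-wise LoRA assigns different factors to different blocks.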