
MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts

by Dengchun Li, Yingzi Ma, Naizheng Wang, Zhengmao Ye, Zhiyuan Cheng, Yinghao Tang, Yan Zhang, Lei Duan, Jie Zuo, Cal Yang, Mingjie Tang

First submitted to arXiv on: 22 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper introduces MixLoRA, a novel approach for constructing resource-efficient sparse Mixture-of-Experts (MoE) models based on LoRA. MixLoRA inserts multiple LoRA-based experts within the feed-forward network block of a frozen pre-trained dense model and employs a commonly used top-k router, improving model performance while keeping the trainable parameter count low. An auxiliary load-balance loss is also proposed to address the router's imbalance problem. Evaluations show that MixLoRA improves accuracy by about 9% compared with state-of-the-art PEFT methods in multi-task learning scenarios.
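
To make the medium-difficulty description concrete, here is a minimal PyTorch sketch of the idea: LoRA adapters acting as experts on top of a frozen feed-forward block, a top-k router, and a Switch-Transformer-style auxiliary load-balance loss. The class and parameter names (`LoRAExpert`, `MixLoRABlock`, `rank`, `alpha`, `num_experts`, `top_k`) are illustrative assumptions, not the authors' implementation; consult the paper and its released code for the exact design.

```python
import torch
import torch.nn as nn

class LoRAExpert(nn.Module):
    """One low-rank adapter acting as an 'expert' on top of the frozen FFN (illustrative)."""
    def __init__(self, d_model: int, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)  # LoRA "A" matrix
        self.up = nn.Linear(rank, d_model, bias=False)    # LoRA "B" matrix
        nn.init.zeros_(self.up.weight)                    # adapters start as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x)) * self.scale

class MixLoRABlock(nn.Module):
    """Frozen dense FFN plus several LoRA experts selected by a top-k router (sketch)."""
    def __init__(self, base_ffn: nn.Module, d_model: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.base_ffn = base_ffn                 # pre-trained FFN, kept frozen
        for p in self.base_ffn.parameters():
            p.requires_grad_(False)
        self.experts = nn.ModuleList(LoRAExpert(d_model) for _ in range(num_experts))
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, d_model)
        probs = self.router(x).softmax(dim=-1)            # (tokens, experts)
        topk_p, topk_i = probs.topk(self.top_k, dim=-1)   # (tokens, top_k)
        gates = torch.zeros_like(probs).scatter(1, topk_i, topk_p)

        # Dense-for-clarity mixing: every expert sees every token here;
        # an efficient implementation would dispatch only the routed tokens.
        out = self.base_ffn(x)
        for e, expert in enumerate(self.experts):
            out = out + gates[:, e:e + 1] * expert(x)

        # Auxiliary load-balance loss (Switch-Transformer style): penalizes a
        # router that concentrates tokens on a few experts.
        token_frac = (gates > 0).float().mean(dim=0)      # fraction of tokens per expert
        prob_frac = probs.mean(dim=0)                     # mean router probability per expert
        aux_loss = len(self.experts) * (token_frac * prob_frac).sum()
        return out, aux_loss
```

In a transformer, a block like this would stand in for each layer's feed-forward network while the rest of the dense model stays frozen; `aux_loss` would be added to the task loss with a small weight so the router learns to spread tokens across experts. The exact placement of adapters and the loss weighting follow the paper, not this sketch.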

Low Difficulty Summary (written by GrooveSquid.com; original content)
MixLoRA is a new way to make language models work better and use less memory. Right now, we have big models that can do lots of things, but they take up too much computer power. MixLoRA makes smaller models that still work well, using something called LoRA. This helps with tasks like translating languages or answering questions. The new approach also tries to fix a problem where some parts of the model get more attention than others. Tests show that MixLoRA works better than other similar ideas and uses less computer power.

Keywords

  • Artificial intelligence
  • Attention
  • LoRA
  • Multi-task