
Summary of PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model, by Yilun Liu et al.


PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model

by Yilun Liu, Yunpu Ma, Shuo Chen, Zifeng Ding, Bailan He, Zhen Han, Volker Tresp

First submitted to arXiv on: 12 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The Mixture-of-Experts (MoE) paradigm has proven to be a powerful approach for scaling transformers with improved resource utilization. However, efficiently fine-tuning MoE models remains largely underexplored. This paper presents a unified framework for integrating Parameter-Efficient Fine-Tuning (PEFT) modules directly into the MoE mechanism. The framework covers several design dimensions, such as functional and compositional strategies. By combining these design choices, the authors introduce Parameter-Efficient Routed Fine-Tuning (PERFT) as a flexible and scalable family of PEFT strategies tailored for MoE models. The effectiveness of PERFT is demonstrated through extensive experiments adapting OLMoE-1B-7B and Mixtral-8×7B to commonsense and arithmetic reasoning tasks. A schematic code sketch of this routed-adapter idea follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
MoE models are a type of transformer that sends each input to a few specialized "expert" sub-networks, and they can be fine-tuned to do many different tasks. But fine-tuning them efficiently is still an open problem. This paper tackles it by combining two techniques: MoE and Parameter-Efficient Fine-Tuning (PEFT). The new technique, called PERFT, makes it possible to fine-tune MoE models in a way that's both efficient and effective. The authors tested PERFT on several tasks, like answering commonsense questions about the world and doing math problems, and showed that it works well.
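
To make the core idea more concrete, here is a minimal PyTorch sketch of a PERFT-style layer: a small set of trainable low-rank adapter experts with their own router runs in parallel with a frozen, pretrained MoE feed-forward block, and only the adapter branch is updated during fine-tuning. All class names, the adapter architecture, and the routing details below are illustrative assumptions for exposition, not the paper's reference implementation.

    # Minimal sketch (assumed names and hyperparameters, not the paper's code):
    # trainable routed adapter experts added in parallel to a frozen MoE FFN.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class LowRankAdapterExpert(nn.Module):
        """One PEFT expert: a rank-r bottleneck (down-project, nonlinearity, up-project)."""
        def __init__(self, d_model: int, rank: int):
            super().__init__()
            self.down = nn.Linear(d_model, rank, bias=False)
            self.up = nn.Linear(rank, d_model, bias=False)
            nn.init.zeros_(self.up.weight)  # adapter starts as a no-op residual

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.up(F.gelu(self.down(x)))


    class RoutedAdapters(nn.Module):
        """A tiny MoE made of adapter experts with its own top-k router.

        For clarity, every expert is applied to every token and sparse gate
        weights zero out the unselected ones; a real implementation would
        dispatch tokens to experts sparsely.
        """
        def __init__(self, d_model: int, rank: int, num_experts: int, top_k: int = 2):
            super().__init__()
            self.router = nn.Linear(d_model, num_experts, bias=False)
            self.experts = nn.ModuleList(
                [LowRankAdapterExpert(d_model, rank) for _ in range(num_experts)]
            )
            self.top_k = top_k

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (num_tokens, d_model)
            logits = self.router(x)                                    # (T, E)
            top_vals, top_idx = logits.topk(self.top_k, dim=-1)        # (T, k)
            gates = torch.zeros_like(logits).scatter_(
                -1, top_idx, F.softmax(top_vals, dim=-1)
            )                                                          # sparse gate weights
            expert_out = torch.stack([e(x) for e in self.experts], 1)  # (T, E, d_model)
            return torch.einsum("te,ted->td", gates, expert_out)


    class PERFTStyleLayer(nn.Module):
        """Frozen pretrained MoE FFN plus a trainable routed-adapter branch."""
        def __init__(self, pretrained_moe_ffn: nn.Module, d_model: int,
                     rank: int = 8, num_adapter_experts: int = 4, top_k: int = 2):
            super().__init__()
            self.moe_ffn = pretrained_moe_ffn
            for p in self.moe_ffn.parameters():
                p.requires_grad_(False)  # backbone stays frozen
            self.adapters = RoutedAdapters(d_model, rank, num_adapter_experts, top_k)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # The adapter output is added to the frozen MoE output.
            return self.moe_ffn(x) + self.adapters(x)

In this sketch only the adapter experts and their router carry trainable parameters, which is what keeps the fine-tuning parameter-efficient; choices such as the adapter architecture, how many adapter experts to use, and how they are routed relative to the original MoE experts are the kind of design dimensions the paper's framework enumerates.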

Keywords

» Artificial intelligence  » Fine tuning  » Mixture of experts  » Parameter efficient  » Transformer