
Summary of AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models, by Zihao Zeng et al.


AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models

by Zihao Zeng, Yibo Miao, Hongcheng Gao, Hao Zhang, Zhijie Deng

First submitted to arXiv on: 19 Jun 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper proposes AdaMoE, a novel approach to building large language models (LLMs) with the mixture-of-experts (MoE) architecture. Unlike existing MoE methods, which enforce a fixed top-k routing for all tokens, AdaMoE lets different tokens select varying numbers of experts for feature abstraction. This is achieved by introducing a fixed number of null experts, which perform no computation, and increasing the value of k, while a load-balancing loss keeps expert usage even. The resulting method resembles MoEs with expert-choice routing yet still allows straightforward auto-regressive modeling. AdaMoE is easy to implement and can be applied to pre-trained LLMs. When fine-tuning Mixtral-8x7B, AdaMoE reduces average expert load (FLOPs) by 14.5% while increasing accuracy by 1.69% on the ARC-C dataset. An illustrative routing sketch follows the summaries below.

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper introduces a new way to make language models work better. These models combine many small pieces (experts) to help them understand text, but different words may need different amounts of help, while current models give every word the same fixed number of experts. AdaMoE solves this by letting each word choose how much help it needs. This helps the model use its resources more efficiently and get better results. The researchers tested the idea on a language understanding task and found that it worked well, reducing the amount of work the model had to do while still getting good results.

Keywords

» Artificial intelligence  » Fine tuning  » Language understanding  » Loss function  » Mixture of experts