A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts

by Mohammed Nowaz Rabbani Chowdhury, Meng Wang, Kaoutar El Maghraoui, Naigang Wang, Pin-Yu Chen, Christopher Carothers

First submitted to arXiv on: 26 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The sparsely gated Mixture-of-Experts (MoE) architecture has shown promise in reducing training computation for large models. However, deploying such models can be memory- or computation-intensive for certain downstream tasks. To address this challenge, the paper introduces a provably effective technique for pruning experts in fine-tuned MoE models. The method prioritizes for pruning the experts with a smaller change in l2 norm from the pretrained model, which provably preserves test accuracy while significantly reducing model size and computational requirements. The approach is analyzed theoretically for binary classification tasks on simplified MoE architectures and verified on large vision MoE models fine-tuned on benchmark datasets such as CIFAR-10, CIFAR-100, and ImageNet. (A minimal code sketch of this pruning criterion appears after the summaries.)

Low Difficulty Summary (original content by GrooveSquid.com)
The MoE architecture sends different inputs to different subnetworks (experts) through trainable routers. To make inference more efficient, the paper explores pruning experts so that less computation is needed at inference time. It proves that pruning the experts whose l2 norm changed least from the pretrained model preserves test accuracy while reducing model size and computational requirements. This matters for large models such as VMoE and E3MoE fine-tuned on datasets like CIFAR-10, CIFAR-100, and ImageNet.

Keywords

» Artificial intelligence  » Classification  » Inference  » Mixture of experts  » Pruning