
Summary of Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design, by Ruisi Cai et al.


Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design

by Ruisi Cai, Yeonju Ro, Geon-Woo Kim, Peihao Wang, Babak Ehteshami Bejnordi, Aditya Akella, Zhangyang Wang

First submitted to arXiv on: 24 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes Read-ME, a novel framework that refactorizes pre-trained large language models (LLMs) into smaller Mixture-of-Experts (MoE) models, addressing the inefficient memory management and suboptimal batching that hamper MoE inference. The approach uses activation sparsity to extract experts and introduces a pre-gating router, decoupled from the MoE backbone, that enables system-friendly pre-computing and lookahead scheduling. This algorithm-system co-design closes critical gaps on both fronts, providing a scalable and efficient alternative for LLM inference in resource-constrained settings. The proposed method outperforms other popular open-source dense models of similar scale, improving MMLU accuracy by up to 10.1% and reducing mean end-to-end latency by up to 6.1%.
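To make the router-decoupling idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the names (PreGatingRouter, lookahead_batches), the pooled-prompt input, and the per-layer gating heads are illustrative assumptions. It shows how deciding every layer's expert assignment up front lets a serving system pre-compute routing and co-batch requests that share an expert plan.

```python
# Minimal sketch of a pre-gating router decoupled from the MoE backbone.
# All class/function names are illustrative, not the paper's actual code;
# the real Read-ME router architecture may differ substantially.
import torch
import torch.nn as nn


class PreGatingRouter(nn.Module):
    """Standalone router that predicts expert choices for all MoE layers
    from the input representation, before the backbone runs. Decoupling it
    from the backbone lets the serving system pre-compute routing and
    schedule requests that share experts (lookahead scheduling)."""

    def __init__(self, hidden_dim: int, num_layers: int, num_experts: int):
        super().__init__()
        # One lightweight gating head per MoE layer (an assumption; the
        # paper's router could equally be shared or recurrent).
        self.gates = nn.ModuleList(
            nn.Linear(hidden_dim, num_experts) for _ in range(num_layers)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: [batch, hidden_dim] pooled representation of each prompt.
        # Returns the top-1 expert index per layer: [batch, num_layers].
        return torch.stack([g(h).argmax(dim=-1) for g in self.gates], dim=1)


def lookahead_batches(expert_plan: torch.Tensor) -> dict:
    """Group request indices by their per-layer expert plan, so the
    scheduler can co-batch requests that will activate the same experts."""
    groups: dict = {}
    for i, plan in enumerate(expert_plan.tolist()):
        groups.setdefault(tuple(plan), []).append(i)
    return groups


if __name__ == "__main__":
    router = PreGatingRouter(hidden_dim=64, num_layers=4, num_experts=8)
    h = torch.randn(16, 64)        # 16 pooled request embeddings
    plan = router(h)               # routing decided before decoding starts
    print(lookahead_batches(plan)) # {expert plan: [request ids]}
```

Because routing no longer depends on intermediate backbone activations, the scheduler can prefetch only the expert weights a request will need and group compatible requests before decoding begins, which is the kind of system-side benefit the paper's co-design targets.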
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about a new way to make big language models smaller and faster. It uses a technique called Read-ME that makes it easier to run these models on devices with limited memory. This matters because many people want to use these models on their phones or tablets but can't, since the models are too big. The new method works by finding the most important parts of the model and using them to build smaller versions. It also helps reduce the time it takes to process information. The results show that this method beats comparable models in both accuracy and speed, making it a great tool for anyone who wants to use language models on devices with limited resources.

Keywords

» Artificial intelligence  » Inference  » Mixture of experts