Summary of Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference, by Jinghan Yao et al.
Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference
by Jinghan Yao, Quentin Anthony, Aamir Shafi, Hari Subramoni, Dhabaleswar K. Panda
First submitted to arXiv on: 16 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract |
Medium | GrooveSquid.com (original content) | The proposed ExFlow optimization significantly accelerates inference for Mixture-of-Experts (MoE) models in large language models such as the Generative Pre-trained Transformer (GPT). By exploiting inter-layer expert affinity, ExFlow reduces the communication bottleneck, cutting cross-GPU routing latency by up to 67% compared to previous methods and improving inference throughput by up to 2.2x over cutting-edge MoE implementations on models with 8 to 64 experts. The study also shows how the model acquires and stabilizes expert affinity during training; a toy sketch of this idea appears below the table. |
Low | GrooveSquid.com (original content) | Large language models use a Mixture of Experts (MoE) to make predictions. When these models run across many computers at once, the computers spend a lot of time talking to each other, which slows everything down. The ExFlow technique speeds things up by finding connections between experts in different layers of the model, letting them work together more efficiently and reducing the amount of information that has to be shared between computers. |
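To make the inter-layer affinity idea concrete, here is a minimal sketch, not the authors' implementation, of how expert affinity could be measured from top-1 routing traces and used to co-locate affine experts on the same GPU. The function names (`affinity_matrix`, `place_experts`) and the greedy placement heuristic are hypothetical illustrations, not ExFlow's actual algorithm.

```python
import numpy as np

def affinity_matrix(routing_trace, num_experts):
    """Count how often a token routed to expert i at layer l is routed
    to expert j at layer l+1. routing_trace[l][t] is the expert index
    chosen for token t at layer l (top-1 routing assumed)."""
    counts = np.zeros((num_experts, num_experts), dtype=np.int64)
    for prev_layer, next_layer in zip(routing_trace[:-1], routing_trace[1:]):
        for i, j in zip(prev_layer, next_layer):
            counts[i, j] += 1
    return counts

def place_experts(counts, num_gpus):
    """Greedily group experts with high inter-layer co-activation onto
    the same GPU, so that consecutive-layer routing tends to stay local
    instead of crossing GPUs. Returns {expert_index: gpu_index}."""
    num_experts = counts.shape[0]
    assert num_experts % num_gpus == 0, "assume an even split for simplicity"
    per_gpu = num_experts // num_gpus
    unassigned = set(range(num_experts))
    placement = {}
    for gpu in range(num_gpus):
        # Seed the group with the unassigned expert that has the most
        # total traffic, then grow the group by strongest affinity.
        seed = max(unassigned, key=lambda e: counts[e, :].sum() + counts[:, e].sum())
        group = [seed]
        unassigned.remove(seed)
        while len(group) < per_gpu:
            best = max(unassigned,
                       key=lambda e: sum(counts[g, e] + counts[e, g] for g in group))
            group.append(best)
            unassigned.remove(best)
        for expert in group:
            placement[expert] = gpu
    return placement

# Toy usage: routing decisions for 6 tokens across 2 consecutive MoE
# layers, 4 experts, 2 GPUs. Experts 0/1 and 2/3 co-activate, so the
# placement keeps each pair on one GPU and routing stays GPU-local.
trace = [[0, 0, 1, 2, 3, 3],   # expert chosen per token at layer l
         [1, 1, 0, 3, 2, 2]]   # expert chosen per token at layer l+1
counts = affinity_matrix(trace, num_experts=4)
print(place_experts(counts, num_gpus=2))  # e.g. {0: 0, 1: 0, 2: 1, 3: 1}
```

In a real MoE system the affinity statistics would come from profiling gate decisions during training or inference, and placement would also need to respect per-GPU memory limits and top-k (k > 1) routing; the sketch only illustrates why grouping affine experts keeps consecutive-layer routing local and cuts cross-GPU traffic.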
Keywords
* Artificial intelligence * GPT * Inference * Mixture of experts * Optimization * Transformer