Summary of Upcycling Large Language Models into Mixture of Experts, by Ethan He et al.
Upcycling Large Language Models into Mixture of Experts
by Ethan He, Abhinav Khattar, Ryan Prenger, Vijay Korthikanti, Zijie Yan, Tong Liu, Shiqing Fan, Ashwath Aithal, Mohammad Shoeybi, Bryan Catanzaro
First submitted to arXiv on: 10 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper explores efficient ways to increase the capacity of pre-trained dense language models by converting them into sparse mixture-of-experts (MoE) models. The authors propose a novel initialization scheme and weight-scaling approach that enable upcycling into fine-grained MoE architectures, and their ablations show that upcycling outperforms continued dense-model training. They also find that softmax-then-topK expert routing improves over topK-then-softmax, and that higher-granularity MoEs can further improve accuracy (see the illustrative sketch below the table). On the Nemotron-4 15B language model, upcycling achieves 67.6% MMLU compared to 65.3% for continued training.
Low | GrooveSquid.com (original content) | This paper takes a pre-trained language model and makes it better by breaking it down into smaller pieces that work together. It’s like a team effort! The researchers found ways to make this process more efficient, which leads to better results. They also tested different methods to see which works best. In the end, they were able to make a big improvement in how well the model understands language.
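
Below is a minimal PyTorch sketch, written for this summary rather than taken from the paper, of the two ideas the medium summary mentions: initializing every expert from the pre-trained dense FFN ("upcycling") and routing tokens with softmax-then-topK versus topK-then-softmax. All names (`upcycle_ffn_to_experts`, `UpcycledMoE`), shapes, and the simple copy-only initialization are illustrative assumptions; the paper's actual recipe additionally involves weight scaling and a fine-grained expert setup not reproduced here.

```python
# Illustrative sketch only; not the authors' implementation.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


def upcycle_ffn_to_experts(dense_ffn: nn.Module, num_experts: int) -> nn.ModuleList:
    """Initialize every expert as a copy of the pre-trained dense FFN (the core upcycling idea)."""
    return nn.ModuleList(copy.deepcopy(dense_ffn) for _ in range(num_experts))


def route(logits: torch.Tensor, top_k: int, softmax_then_topk: bool = True):
    """Pick top_k experts per token and return (expert indices, routing weights)."""
    if softmax_then_topk:
        # softmax over ALL experts, then keep the top-k probabilities
        probs = F.softmax(logits, dim=-1)
        weights, indices = torch.topk(probs, top_k, dim=-1)
    else:
        # keep the top-k logits first, then softmax over only those k
        top_logits, indices = torch.topk(logits, top_k, dim=-1)
        weights = F.softmax(top_logits, dim=-1)
    return indices, weights


class UpcycledMoE(nn.Module):
    """A dense FFN 'upcycled' into a sparse MoE layer (hypothetical example)."""

    def __init__(self, dense_ffn: nn.Module, hidden: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.experts = upcycle_ffn_to_experts(dense_ffn, num_experts)
        self.router = nn.Linear(hidden, num_experts)  # newly initialized router
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [num_tokens, hidden]
        indices, weights = route(self.router(x), self.top_k)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e  # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    hidden = 64
    dense_ffn = nn.Sequential(
        nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden)
    )
    moe = UpcycledMoE(dense_ffn, hidden, num_experts=8, top_k=2)
    print(moe(torch.randn(16, hidden)).shape)  # torch.Size([16, 64])
```

At initialization every expert computes exactly what the dense FFN did, so the upcycled model starts from the dense model's quality and the router then learns to specialize the experts during continued training; the `softmax_then_topk` flag only switches the order of the softmax and top-k steps so the two routing variants compared in the paper can be tried side by side.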
Keywords
» Artificial intelligence » Language model » Mixture of experts » Softmax