Summary of OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models, by Fuzhao Xue et al.
OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
by Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, Yang You
First submitted to arXiv on: 29 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | The authors introduce OpenMoE, an open-source and reproducible series of Mixture-of-Experts (MoE) based large language models (LLMs), ranging from 650M to 34B parameters. These decoder-only MoE LLMs were trained with up to over 1T tokens. The study shows that MoE-based LLMs can offer a more favorable cost-effectiveness trade-off than dense LLMs, making them a promising direction for future LLM development (a minimal sketch of an MoE layer appears after this table). |
| Low | GrooveSquid.com (original content) | OpenMoE is a new series of language models that can help us better understand how big AI models work. These models are open source, so anyone can use them and learn from them. The authors trained these models on a huge amount of text data, over 1 trillion tokens! They found that these special models, called MoE-based LLMs, might be more efficient than other models while still being very good at understanding language. |
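To make the "Mixture-of-Experts" idea in the summaries above concrete: an MoE layer replaces a single feed-forward block with several expert networks and a router that sends each token to only a few of them, so total parameter count grows while per-token compute stays roughly constant, which is the cost-effectiveness trade-off the paper studies. The code below is a minimal, hypothetical sketch of a top-2 MoE layer in PyTorch; the class name, sizes, and routing details are illustrative assumptions and do not reproduce OpenMoE's actual configuration.

```python
# Hypothetical sketch of a top-2 Mixture-of-Experts feed-forward layer.
# Sizes and routing details are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an independent position-wise feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); flatten tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        gate_logits = self.router(tokens)                    # (n_tokens, num_experts)
        weights, expert_ids = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                 # renormalize over the chosen experts
        out = torch.zeros_like(tokens)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_ids[:, k] == e                 # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k : k + 1] * expert(tokens[mask])
        return out.reshape_as(x)


if __name__ == "__main__":
    layer = MoELayer(d_model=64, d_ff=256, num_experts=8, top_k=2)
    y = layer(torch.randn(2, 16, 64))
    print(y.shape)  # torch.Size([2, 16, 64])
```

Because only `top_k` of the `num_experts` feed-forward networks run for any given token, adding experts increases model capacity without a proportional increase in per-token FLOPs, which is the general mechanism behind the favorable cost-effectiveness trade-off described in the paper.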
Keywords
* Artificial intelligence
* Decoder
* Mixture of experts