
Summary of OLMoE: Open Mixture-of-Experts Language Models, by Niklas Muennighoff et al.


OLMoE: Open Mixture-of-Experts Language Models

by Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, Yuling Gu, Shane Arora, Akshita Bhagia, Dustin Schwenk, David Wadden, Alexander Wettig, Binyuan Hui, Tim Dettmers, Douwe Kiela, Ali Farhadi, Noah A. Smith, Pang Wei Koh, Amanpreet Singh, Hannaneh Hajishirzi

First submitted to arxiv on: 3 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces OLMoE, a state-of-the-art language model that leverages a sparse Mixture-of-Experts (MoE) architecture. The model, OLMoE-1B-7B, has 7 billion parameters but uses only 1 billion of them per input token. It is pretrained on 5 trillion tokens and then adapted to create OLMoE-1B-7B-Instruct. Both models outperform available models with similar active parameter counts, even surpassing larger ones such as Llama2-13B-Chat and DeepSeekMoE-16B. The paper presents experiments on MoE training, analyzes routing in the model (showing high specialization), and open-sources all aspects of the work: model weights, training data, code, and logs. (A small code sketch of this sparse routing idea follows the summaries below.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a new language model called OLMoE that uses a technique called Mixture-of-Experts (MoE). The model is special because it is very efficient: it uses only 1 billion of its 7 billion “pieces” to process each piece of text. The researchers pretrained the model on a huge amount of text data and then adapted it for specific tasks. They found that their model performs better than other similar models, even some with many more “pieces”. The paper also explores how the model works and why it is so good at understanding language.

Keywords

» Artificial intelligence  » Language model  » Mixture of experts  » Token