Summary of PLaMo-100B: A Ground-Up Language Model Designed for Japanese Proficiency, by Preferred Elements, Kenshin Abe et al.
PLaMo-100B: A Ground-Up Language Model Designed for Japanese Proficiency
by Preferred Elements, Kenshin Abe, Kaizaburo Chubachi, Yasuhiro Fujita, Yuta Hirokawa, Kentaro Imajo, Toshiki Kataoka, Hiroyoshi Komatsu, Hiroaki Mikami, Tsuguo Mogami, Shogo Murai, Kosuke Nakago, Daisuke Nishino, Toru Ogawa, Daisuke Okanohara, Yoshihiko Ozaki, Shotaro Sano, Shuji Suzuki, Tianqi Xu, Toshihiko Yanase
First submitted to arXiv on: 10 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary: the paper's original abstract (available on arXiv). |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: PLaMo-100B, a new large-scale language model built from the ground up for Japanese proficiency, is introduced. The model is trained from scratch on 2 trillion tokens, with QK Normalization and Z-Loss used to keep training stable (a minimal illustrative sketch of these techniques appears after this table). Post-training with Supervised Fine-Tuning and Direct Preference Optimization further refines the model. Benchmark evaluations show results competitive with frontier models such as GPT-4, particularly on Japanese-specific tasks. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary: PLaMo-100B is a new language model built to be strong at Japanese. It was trained from scratch on a huge amount of data, using special techniques to keep the training process stable. After training, its behavior was further improved with additional tuning methods. When compared to other top models like GPT-4, PLaMo-100B does well, especially on tasks focused on Japanese. |
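The medium summary names two training-stability techniques, QK Normalization and Z-Loss, without spelling them out. The sketch below shows one common way these ideas are implemented in PyTorch; the tensor shapes, the fixed temperature, the L2-normalization variant of QK Norm, the 1e-4 z-loss weight, and the function names are all illustrative assumptions, not details taken from the PLaMo-100B paper.

```python
# Illustrative sketch only: shapes, hyperparameters, and function names are
# assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def qk_norm_attention(q, k, v, temperature=8.0):
    """Attention with QK Normalization: queries and keys are L2-normalized
    along the head dimension before the dot product, which bounds the
    attention logits and helps keep large-scale training stable.
    (Real implementations may instead use LayerNorm/RMSNorm on q and k,
    or make the temperature learnable.)"""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    attn = F.softmax(q @ k.transpose(-2, -1) * temperature, dim=-1)
    return attn @ v

def z_loss(logits, weight=1e-4):
    """Auxiliary Z-Loss: penalizes the squared log-partition term log Z of
    the output softmax so logit magnitudes do not drift during training."""
    log_z = torch.logsumexp(logits, dim=-1)
    return weight * (log_z ** 2).mean()

# Toy usage with random tensors.
q = k = v = torch.randn(2, 4, 16, 64)        # (batch, heads, seq, head_dim)
out = qk_norm_attention(q, k, v)

logits = torch.randn(2, 16, 32000)           # (batch, seq, vocab)
targets = torch.randint(0, 32000, (2, 16))
loss = F.cross_entropy(logits.reshape(-1, 32000), targets.reshape(-1)) + z_loss(logits)
```

The post-training steps the summaries mention, Supervised Fine-Tuning and Direct Preference Optimization, would then be applied on top of a model pretrained with a loss like the one above.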
Keywords
» Artificial intelligence » Fine-tuning » GPT » Language model » Optimization » Supervised