Summary of Ascend HiFloat8 Format for Deep Learning, by Yuanyong Luo et al.
Ascend HiFloat8 Format for Deep Learning
by Yuanyong Luo, Zhongxing Zhang, Richard Wu, Hu Liu, Ying Jin, Kai Zheng, Minmin Wang, Zhanying He, Guipeng Hu, Luyao Chen, Tianchi Hu, Junsong Wang, Minqi Chen, Mikhaylov Dmitry, Korviakov Vladimir, Bobrin Maxim, Yuhao Hu, Guanfu Chen, Zeyi Huang
First submitted to arXiv on: 25 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Hardware Architecture (cs.AR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This preliminary white paper proposes HiFloat8 (HiF8), a novel 8-bit floating-point data format for deep learning. The format features tapered precision: the number of mantissa bits depends on the magnitude of the value being encoded. For normal values, HiF8 provides 7 exponent values with a 3-bit mantissa, 8 exponent values with a 2-bit mantissa, and 16 exponent values with a 1-bit mantissa, striking a better balance between precision and dynamic range (see the illustrative sketch after this table). HiF8 also encodes all the special values, except that positive zero and negative zero share a single bit pattern. The format can be used in both the forward and backward passes of AI training, and its effectiveness is demonstrated through extensive simulation results on various neural networks, including traditional networks and large language models (LLMs). |
Low | GrooveSquid.com (original content) | This paper proposes a new way to store numbers for deep learning, called HiFloat8. It is a special kind of number format that can be used in both the forward and backward stages of training an AI model. It is better than other formats because it gives more precision where it matters most while still covering a very wide range of values in the same 8 bits. The authors explain how this works and show how well it performs on different types of AI models. |
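
The tapered-precision idea from the medium summary can be made concrete with a small sketch. The Python below is not the official HiF8 encoding: the paper's exact bit layout, exponent bias, denormal handling, and rounding rule are not reproduced here, and the symmetric split of exponent values around zero is an assumption. It only illustrates how the mantissa width, and hence the spacing of representable values, shrinks as the exponent moves away from zero, using the 7/8/16 exponent-value counts quoted from the abstract.

```python
import math

# Illustrative sketch of "tapered precision" -- NOT the official HiF8 bit
# layout. The 7/8/16 exponent-value counts come from the abstract; the
# symmetric split of exponents around zero and the rounding rule are
# assumptions made only for illustration.

def mantissa_bits(exponent: int) -> int:
    """Assumed tapering schedule: more mantissa bits near exponent 0."""
    mag = abs(exponent)
    if mag <= 3:        # 7 exponent values (-3..3)   -> 3-bit mantissa
        return 3
    if mag <= 7:        # 8 exponent values (+-4..7)  -> 2-bit mantissa
        return 2
    if mag <= 15:       # 16 exponent values (+-8..15) -> 1-bit mantissa
        return 1
    raise ValueError("exponent outside this sketch's normal range")

def quantize(x: float) -> float:
    """Round x to the nearest value representable under the assumed schedule."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))     # unbiased exponent of x
    step = 2.0 ** (e - mantissa_bits(e))  # spacing of representable values
    return round(x / step) * step

for v in (1.19, 9.7, 1000.0):
    print(f"{v} -> {quantize(v)}")
```

Running the sketch shows the trade-off directly: values near 1.0 are rounded on a grid of 1/8 (1.19 becomes 1.25), while values around 1000 land on a grid of 256 (1000.0 becomes 1024.0), exchanging precision for dynamic range within a single 8-bit budget.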
Keywords
» Artificial intelligence » Deep learning » Precision