


FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design

by Haojun Xia, Zhen Zheng, Xiaoxia Wu, Shiyang Chen, Zhewei Yao, Stephen Youn, Arash Bakhtiari, Michael Wyatt, Donglin Zhuang, Zhongzhu Zhou, Olatunji Ruwase, Yuxiong He, Shuaiwen Leon Song

First submitted to arXiv on: 25 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Hardware Architecture (cs.AR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The paper’s original abstract, available on arXiv, serves as the high difficulty summary.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed TC-FPx kernel design scheme enables the first full-stack GPU kernel for floating-point weights of various quantization bit-widths with unified Tensor Core support. It addresses two challenges in existing systems: unfriendly memory access patterns and the high runtime overhead of weight de-quantization. Integrating TC-FPx into an inference system provides end-to-end support for quantized LLM inference and achieves a better trade-off between inference cost and model quality. Experiments show that FP6-LLM enables inference of LLaMA-70b on a single GPU, with 1.69x-2.65x higher normalized inference throughput than the FP16 baseline.
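
As a rough numerical illustration of what 6-bit floating-point weight quantization does, here is a minimal NumPy sketch. It is not the paper's TC-FPx kernel; the E3M2 layout (1 sign, 3 exponent, 2 mantissa bits), the exponent bias of 3, and round-to-nearest without per-group scaling are assumptions made only for illustration.

```python
# Hypothetical FP6 (E3M2) quantization sketch -- numerical effect only,
# not the TC-FPx GPU kernel or the paper's exact quantization recipe.
import numpy as np

def fp6_grid(exp_bits=3, man_bits=2, exp_bias=3):
    """Enumerate the non-negative values representable in the assumed FP6 format."""
    vals = [0.0]
    for e in range(1, 2 ** exp_bits):              # normal exponents
        for m in range(2 ** man_bits):
            vals.append((1 + m / 2 ** man_bits) * 2.0 ** (e - exp_bias))
    for m in range(1, 2 ** man_bits):              # subnormals
        vals.append((m / 2 ** man_bits) * 2.0 ** (1 - exp_bias))
    return np.array(sorted(vals))

def quantize_fp6(w):
    """Round each weight to the nearest representable FP6 magnitude, keeping its sign."""
    grid = fp6_grid()
    idx = np.abs(np.abs(w)[..., None] - grid).argmin(axis=-1)
    return np.sign(w) * grid[idx]

rng = np.random.default_rng(0)
w_fp16 = rng.standard_normal((4, 8)).astype(np.float16)
w_fp6 = quantize_fp6(w_fp16.astype(np.float32))
print("max abs quantization error:", np.abs(w_fp16.astype(np.float32) - w_fp6).max())
```

In the real system the 6-bit weights are stored packed and de-quantized at runtime, which is exactly the memory-access and overhead problem the TC-FPx kernel design targets.
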
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps reduce the size of large language models (LLMs) while preserving their quality across various applications. It does this by creating a new way to use GPUs for LLMs that have been shrunk using six-bit quantization. The new approach, called TC-FPx, makes it easier and faster to do calculations on these smaller models.
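
For a rough sense of why six-bit weights make single-GPU serving of a 70-billion-parameter model feasible, the back-of-the-envelope sketch below compares weight memory at 16 versus 6 bits per parameter. The 80 GB GPU capacity is an assumed reference point, and KV cache, activations, and packing overhead are ignored.

```python
# Weights-only memory estimate; KV cache, activations, and packing overhead ignored.
params = 70e9                    # LLaMA-70b parameter count
gib = 1024 ** 3

fp16_bytes = params * 16 / 8     # 16 bits per weight
fp6_bytes = params * 6 / 8       # 6 bits per weight

print(f"FP16 weights: {fp16_bytes / gib:.1f} GiB")  # ~130 GiB: needs multiple GPUs
print(f"FP6 weights:  {fp6_bytes / gib:.1f} GiB")   # ~49 GiB: fits on one 80 GB GPU
```
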

Keywords

* Artificial intelligence  * Inference  * Llama  * Quantization