Summary of FLUX: Fast Software-based Communication Overlap on GPUs Through Kernel Fusion, by Li-Wen Chang et al.
FLUX: Fast Software-based Communication Overlap On GPUs Through Kernel Fusion
by Li-Wen Chang, Wenlei Bao, Qi Hou, Chengquan Jiang, Ningxin Zheng, Yinmin Zhong, Xuanrun Zhang, Zuquan Song, Chengji Yao, Ziheng Jiang, Haibin Lin, Xin Jin, Xin Liu
First submitted to arXiv on: 11 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on the paper page |
Medium | GrooveSquid.com (original content) | This paper proposes a method called Flux that improves the scalability of large deep learning models by hiding the communication latency between GPUs. The authors argue that tensor parallelism, which partitions computation across devices, introduces significant communication overhead that limits its effectiveness even on clusters with high-speed interconnects such as NVLink. To address this, Flux over-decomposes communication and computation operations and fuses them into finer-grained kernels, allowing up to 96% of the communication to be overlapped with computation (a minimal sketch of the overlap idea appears after this table). This yields speedups of up to 1.24x for training over Megatron-LM on a cluster of 128 GPUs, and up to 1.66x and 1.30x for prefill and decoding inference over vLLM on a cluster of 8 GPUs. |
Low | GrooveSquid.com (original content) | Large deep learning models can solve many tasks, but their training and inference often have to be distributed across multiple devices. One technique used to speed up these computations is tensor parallelism, which breaks an operation or layer into smaller parts that different devices can process at the same time. However, this approach introduces extra communication between the devices, which can slow things down even when the devices are connected by fast links. The authors of this paper propose a method called Flux that hides these communication delays so they don't slow down the overall computation. |
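
To make the over-decomposition idea concrete, below is a minimal PyTorch-style sketch contrasting a standard tensor-parallel GEMM followed by a blocking all-reduce with an over-decomposed version that overlaps each chunk's all-reduce with the next chunk's matmul. This is not the Flux implementation: Flux fuses these steps into single GPU kernels, whereas this sketch only approximates the idea with asynchronous collectives. The function names and chunk count are illustrative assumptions.

```python
# Sketch of communication/computation overlap via over-decomposition.
# NOT the Flux implementation: Flux fuses these steps into single GPU kernels;
# this only approximates the idea with PyTorch asynchronous collectives.
import torch
import torch.distributed as dist


def row_parallel_matmul_baseline(x, w_shard):
    """Baseline tensor parallelism: full local GEMM, then a blocking all-reduce."""
    partial = torch.matmul(x, w_shard)              # local partial result
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)  # communication waits for the whole GEMM
    return partial


def row_parallel_matmul_overlapped(x, w_shard, num_chunks=4):
    """Over-decomposed version: split the GEMM into chunks and launch the
    all-reduce for each chunk while the next chunk is still computing."""
    outputs, handles = [], []
    for x_chunk in torch.chunk(x, num_chunks, dim=0):
        partial = torch.matmul(x_chunk, w_shard)
        # async_op=True returns immediately, so this chunk's communication
        # overlaps with the GEMM of the following chunk.
        handles.append(dist.all_reduce(partial, op=dist.ReduceOp.SUM, async_op=True))
        outputs.append(partial)
    for handle in handles:
        handle.wait()
    return torch.cat(outputs, dim=0)
```

Finer chunks expose more overlap but shrink each GEMM, which can hurt kernel efficiency; the paper's argument is that fusing communication into the compute kernel itself avoids this trade-off rather than relying on separate kernel launches as in the sketch above.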
Keywords
» Artificial intelligence » Deep learning » Inference