Summary of Fully-fused Multi-Layer Perceptrons on Intel Data Center GPUs, by Kai Yuan et al.


Fully-fused Multi-Layer Perceptrons on Intel Data Center GPUs

by Kai Yuan, Christoph Bauinger, Xiangyi Zhang, Pascal Baehr, Matthias Kirchhart, Darius Dabert, Adrien Tousnakhoff, Pierre Boudier, Michael Paulitsch

First submitted to arXiv on: 26 Mar 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)

The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)

This paper presents a SYCL implementation of Multi-Layer Perceptrons (MLPs) optimized for the Intel Data Center GPU Max 1550. The authors minimize slow global memory accesses by maximizing data reuse within the general register file and shared local memory, fusing the operations in each layer to increase arithmetic intensity and improve performance, especially for inference. A roofline model demonstrates the significant impact of these optimizations. The implementation outperforms a CUDA implementation on Nvidia's H100 GPU by factors of up to 2.84 in inference and 1.75 in training. The paper also showcases the efficiency of the SYCL implementation in image compression, neural radiance fields, and physics-informed machine learning tasks, outperforming off-the-shelf implementations by factors of up to 30.
Low Difficulty Summary (GrooveSquid.com, original content)

This paper is about making a special kind of artificial intelligence called Multi-Layer Perceptrons (MLPs) work better on certain computers. The authors did this by finding ways to use the computer's memory more efficiently, which makes the AI run faster. They compared their approach to an existing one and found that theirs worked up to 2.84 times faster when making predictions and up to 1.75 times faster when learning. They also showed that this new approach works well for different kinds of tasks, like compressing images or learning about the physical world.
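The fusion argument in the medium difficulty summary can be illustrated with a toy roofline calculation. This is a hedged sketch, not the paper's model: the layer width, layer count, and hardware numbers below are hypothetical placeholders, and weight traffic is assumed to be fully cached on-chip so only activation traffic is counted.

```python
# Toy roofline sketch: why fusing MLP layers raises arithmetic intensity.
# All concrete numbers here are illustrative assumptions, not figures
# from the paper or from any real GPU datasheet.

def attainable_gflops(arithmetic_intensity, peak_gflops, bandwidth_gb_s):
    """Roofline model: performance is capped by either peak compute
    or by memory bandwidth times arithmetic intensity."""
    return min(peak_gflops, arithmetic_intensity * bandwidth_gb_s)

def arithmetic_intensity(width, layers, fused, bytes_per_elem=2):
    """Flops per byte of global-memory traffic, per batch element.

    Assumption: weight matrices are reused across the batch and stay
    on-chip, so only activation traffic is counted.
    """
    flops = 2 * width * width * layers  # one matmul per layer
    if fused:
        # Intermediate activations stay in registers / shared local
        # memory: only the network input is read and the final output
        # written to global memory.
        bytes_moved = 2 * width * bytes_per_elem
    else:
        # Every layer reads its input from and writes its output to
        # global memory.
        bytes_moved = 2 * width * bytes_per_elem * layers
    return flops / bytes_moved

# Hypothetical narrow MLP: width 64, 4 hidden layers, half precision.
ai_unfused = arithmetic_intensity(64, 4, fused=False)  # 32 flops/byte
ai_fused = arithmetic_intensity(64, 4, fused=True)     # 128 flops/byte

# Hypothetical device: 50 TFLOP/s peak, 1 TB/s global-memory bandwidth.
print(attainable_gflops(ai_unfused, 50_000, 1_000))  # memory bound
print(attainable_gflops(ai_fused, 50_000, 1_000))    # compute bound
```

With these made-up numbers, the unfused network is memory-bandwidth bound while the fused one reaches the compute roof, which is the qualitative effect the paper's roofline analysis attributes to operator fusion.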

Keywords

» Artificial intelligence  » Inference  » Machine learning