Summary of Optimizing Tensor Computation Graphs with Equality Saturation and Monte Carlo Tree Search, by Jakob Hartmann et al.
Optimizing Tensor Computation Graphs with Equality Saturation and Monte Carlo Tree Search
by Jakob Hartmann, Guoliang He, Eiko Yoneki
First submitted to arXiv on: 7 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the paper’s original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | The paper presents a novel approach for reducing the inference latency of deep neural networks while preserving their performance, focusing on the phase-ordering problem that arises when input computation graphs are rewritten sequentially. The authors propose a tensor graph rewriting method that uses Monte Carlo tree search to construct superior intermediate representations (IRs) and identify the most promising rewrite rules. They also introduce an extraction algorithm that provides fast and accurate runtime estimates for the tensor programs in those IRs. Compared to existing methods, the approach achieves up to 11% inference speedup. |
| Low | GrooveSquid.com (original content) | The paper is about making deep learning models run faster without losing their ability to produce correct results. Existing techniques rewrite parts of a model’s computation graph to speed it up, but it is hard to find the best combination of rewrite rules. The authors propose a new method that pairs a search algorithm with an extraction algorithm to solve this problem and make models run even faster. |
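The medium-difficulty summary describes using Monte Carlo tree search to pick promising rewrite rules, guided by runtime estimates. The toy sketch below illustrates the general MCTS loop (UCT selection, expansion, random rollout, backpropagation) applied to rewrite-rule choice. The rule set, the scalar cost stand-in for a computation graph, and all names here are illustrative assumptions, not the paper’s actual implementation.

```python
import math
import random

# Toy rewrite rules: each maps an estimated runtime cost to a new cost
# (lower is better). A real system would rewrite graph IRs instead.
RULES = {
    "fuse": lambda c: c * 0.90,
    "reorder": lambda c: c * 0.95,
    "noop": lambda c: c,
}

class Node:
    """One node per sequence of applied rewrite rules."""
    def __init__(self, cost, parent=None):
        self.cost = cost
        self.parent = parent
        self.children = {}      # rule name -> Node
        self.visits = 0
        self.total_reward = 0.0

    def uct_child(self, c=1.4):
        # UCT: balance mean reward (exploitation) against exploration.
        return max(
            self.children.values(),
            key=lambda n: n.total_reward / n.visits
            + c * math.sqrt(math.log(self.visits) / n.visits),
        )

def rollout(cost, rng, depth=3):
    # Random playout: apply random rules; reward favors lower runtime.
    for _ in range(depth):
        cost = rng.choice(list(RULES.values()))(cost)
    return -cost

def mcts(initial_cost, iterations=200, seed=0):
    rng = random.Random(seed)
    root = Node(initial_cost)
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(RULES):
            node = node.uct_child()
        # Expansion: try one untried rewrite rule.
        untried = [r for r in RULES if r not in node.children]
        if untried:
            rule = rng.choice(untried)
            node.children[rule] = Node(RULES[rule](node.cost), parent=node)
            node = node.children[rule]
        # Simulation + backpropagation.
        reward = rollout(node.cost, rng)
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # The recommended first rewrite is the most-visited child of the root.
    return max(root.children, key=lambda r: root.children[r].visits)
```

In this sketch the reward is simply the negated estimated runtime after a short random rollout; the paper instead pairs the search with an extraction algorithm that supplies fast, accurate runtime estimates for the candidate IRs.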
Keywords
» Artificial intelligence » Deep learning » Inference