Summary of Learning to Compile Programs to Neural Networks, by Logan Weber et al.
Learning to Compile Programs to Neural Networks
by Logan Weber, Jesse Michel, Alex Renda, Michael Carbin
First submitted to arXiv on: 21 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv |
Medium | GrooveSquid.com (original content) | This paper introduces “neural surrogate compilation,” a technique for producing a neural surrogate of a program directly from its source text, rather than by first running the program to collect training data. A hypernetwork acts as the compiler: it consumes program text and emits the parameters of a neural surrogate, which can then be used for tasks such as automatic tuning of program inputs and adaptation to new settings. The authors implement neural surrogate compilers as hypernetworks trained on a dataset of C programs and report significant improvements in data efficiency, accuracy, and training time over surrogates trained from scratch (a minimal illustrative sketch follows the table). |
Low | GrooveSquid.com (original content) | This paper is about a new way to build “neural surrogates,” which are neural networks that imitate what a program does. Surrogates help computers do things like tune how a program runs or adapt it to new situations. Usually, a surrogate is trained by showing it many examples of what the program should do. This paper shows that a surrogate can instead be produced just by reading the code that makes up the program, which could make this process faster and more efficient. |
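
The medium-difficulty summary above describes the core mechanism: a hypernetwork that reads program text and emits the weights of a neural surrogate. The snippet below is a minimal PyTorch sketch of that idea, not the authors’ implementation; the character-level encoder, the fixed surrogate shape, and all names (`SurrogateCompiler`, `run_surrogate`, `SURROGATE_LAYERS`) are illustrative assumptions.

```python
# Minimal sketch of a neural surrogate compiler as a hypernetwork.
# Not the authors' implementation: the character-level GRU encoder, the
# fixed 2-16-1 surrogate shape, and all names here are illustrative.
import torch
import torch.nn as nn

# Shape of the surrogate MLP whose weights the compiler emits,
# e.g. for a program f: R^2 -> R, as (in_features, out_features) pairs.
SURROGATE_LAYERS = [(2, 16), (16, 1)]
N_SURROGATE_PARAMS = sum(i * o + o for i, o in SURROGATE_LAYERS)

class SurrogateCompiler(nn.Module):
    """Reads program text, outputs the flattened weights of a surrogate MLP."""
    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, N_SURROGATE_PARAMS)

    def forward(self, program_tokens):
        # program_tokens: (batch, seq_len) integer-encoded program text
        emb = self.embed(program_tokens)
        _, h = self.encoder(emb)
        return self.head(h[-1])  # (batch, N_SURROGATE_PARAMS)

def run_surrogate(flat_params, x):
    """Evaluate the surrogate MLP defined by flat_params on inputs x."""
    offset = 0
    for idx, (fan_in, fan_out) in enumerate(SURROGATE_LAYERS):
        w = flat_params[offset:offset + fan_in * fan_out].view(fan_out, fan_in)
        offset += fan_in * fan_out
        b = flat_params[offset:offset + fan_out]
        offset += fan_out
        x = x @ w.t() + b
        if idx < len(SURROGATE_LAYERS) - 1:
            x = torch.relu(x)
    return x

# "Compile" a C program's text into surrogate weights, then run the surrogate.
program = "float f(float a, float b) { return a * a + b; }"
tokens = torch.tensor([[ord(c) % 128 for c in program]])
compiler = SurrogateCompiler()
params = compiler(tokens)[0]                   # flattened surrogate weights
predictions = run_surrogate(params, torch.randn(4, 2))
```

In this sketch the surrogate has a fixed shape and the compiler emits one flat parameter vector per program; the paper’s actual setup (language-model-style hypernetworks trained on a dataset of C programs) differs in scale and detail.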