Summary of Symbolic Regression with a Learned Concept Library, by Arya Grayeli et al.
Symbolic Regression with a Learned Concept Library
by Arya Grayeli, Atharva Sehgal, Omar Costilla-Reyes, Miles Cranmer, Swarat Chaudhuri
First submitted to arXiv on: 14 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE); Symbolic Computation (cs.SC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper presents LaSR, a novel method for symbolic regression (SR), which searches for compact programmatic hypotheses that best explain a dataset. The authors enhance genetic algorithms by inducing a library of abstract textual concepts via zero-shot queries to a large language model (LLM). The algorithm interleaves standard evolutionary steps with LLM-guided steps conditioned on the discovered concepts. LaSR is validated on the Feynman equations, a popular SR benchmark, as well as on synthetic tasks. Results show that LaSR substantially outperforms state-of-the-art SR approaches based on deep learning and evolutionary algorithms. |
| Low | GrooveSquid.com (original content) | This paper helps us better understand how to find simple explanations for complex data. It presents a new method called LaSR, which combines large language models with classic genetic algorithms. The idea is to use zero-shot queries (questions the language model answers without any task-specific examples or extra training) to discover and evolve concepts that guide the search for better explanations. The method is tested on classic benchmark problems and shows strong results. It even helps discover a new scaling law for large language models! |
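The loop described in the summaries — standard evolutionary steps interleaved with concept-guided proposals — can be sketched as a toy program. This is an illustrative assumption of how such a search might look, not the paper's actual implementation: the expression grammar is a tiny hand-picked list, and `propose_with_concepts` is a stub standing in for a real zero-shot LLM query.

```python
import math
import random

random.seed(0)

# Toy dataset generated by the target law y = 2*x + 1.
DATA = [(x, 2 * x + 1) for x in range(-5, 6)]

# Candidate hypotheses: tiny fixed grammar of expressions in x (an assumption,
# not LaSR's actual program space).
PRIMITIVES = ["x", "x + 1", "2 * x", "2 * x + 1", "x * x", "x - 1", "3 * x"]

def fitness(expr):
    """Negative mean squared error of expr on DATA (higher is better)."""
    try:
        err = sum((eval(expr, {"x": x}) - y) ** 2 for x, y in DATA) / len(DATA)
        return -err
    except Exception:
        return -math.inf

def mutate(expr):
    """Standard evolutionary step: replace with a random primitive."""
    return random.choice(PRIMITIVES)

def propose_with_concepts(concepts):
    """Stub for an LLM-guided step conditioned on discovered concepts.
    A real system would prompt an LLM zero-shot; here a 'linear' concept
    simply biases sampling toward linear-looking expressions."""
    if "linear" in concepts:
        linear = [e for e in PRIMITIVES if "*" not in e or e.startswith("2 *")]
        return random.choice(linear)
    return random.choice(PRIMITIVES)

def search(generations=50, pop_size=10):
    concepts = {"linear"}  # stubbed concept library
    population = [random.choice(PRIMITIVES) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # elitist selection
        children = [mutate(e) for e in survivors[:2]]    # standard steps
        n_guided = pop_size - len(survivors) - len(children)
        guided = [propose_with_concepts(concepts) for _ in range(n_guided)]
        population = survivors + children + guided
    return max(population, key=fitness)

best = search()
print("best hypothesis:", best)
```

In LaSR itself the concept library is induced and evolved by the LLM rather than fixed up front, and the hypothesis space is a full symbolic-expression grammar; the sketch only shows how concept-conditioned proposals slot into an otherwise ordinary genetic loop.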
Keywords
» Artificial intelligence » Deep learning » Large language model » Machine learning » Regression » Zero shot