
Generative Pre-Trained Transformer for Symbolic Regression Base In-Context Reinforcement Learning

by Yanjie Li, Weijun Li, Lina Yu, Min Wu, Jingyi Liu, Wenqiang Li, Meilan Hao, Shu Wei, Yusong Deng

First submitted to arXiv on: 9 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes FormulaGPT, a method that combines the strong search ability of symbolic regression (SR) algorithms based on genetic programming (GP) or reinforcement learning (RL) with the speed and scalability of generative pre-trained Transformers (GPT). By training a GPT on a large collection of sparse-reward learning histories gathered from RL-based SR runs, the method distills the reinforcement-learning process into a Transformer that automatically updates its policy in context. This approach achieves state-of-the-art performance on more than ten datasets, including SRBench, and outperforms four baselines in fitting ability, noise robustness, versatility, and inference efficiency.
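
To make the distillation idea concrete, here is a minimal sketch (not the authors' implementation) of how an RL learning history might be serialized and fed to a decoder-only Transformer with a standard next-token loss. The token set, reward binning, sequence format, and model sizes are all illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical token set: a few operators/operands for expression trees,
# ten discretized reward bins "r0".."r9", and a separator token.
TOKENS = ["<pad>", "<sep>", "add", "mul", "sin", "x", "c"] + [f"r{i}" for i in range(10)]
stoi = {t: i for i, t in enumerate(TOKENS)}

class HistoryTransformer(nn.Module):
    """Decoder-only Transformer trained on serialized SR learning histories."""
    def __init__(self, vocab_size, d_model=64, n_head=4, n_layer=2, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        # idx: (batch, seq_len) token ids; causal mask keeps attention left-to-right
        T = idx.size(1)
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(idx.device)
        return self.head(self.blocks(x, mask=mask))  # (batch, seq_len, vocab)

def encode_history(history):
    """Serialize [(expr_tokens, reward), ...] as expr <sep> reward_bin <sep> ..."""
    ids = []
    for expr, reward in history:
        ids += [stoi[t] for t in expr]
        ids += [stoi["<sep>"], stoi[f"r{min(int(reward * 10), 9)}"], stoi["<sep>"]]
    return torch.tensor(ids)

# One fabricated learning history: low-reward attempts followed by a better one.
hist = [(["sin", "x"], 0.21), (["add", "x", "c"], 0.55), (["mul", "x", "x"], 0.93)]
seq = encode_history(hist).unsqueeze(0)              # (1, seq_len)
model = HistoryTransformer(len(TOKENS))
logits = model(seq[:, :-1])                          # predict each next token
loss = nn.functional.cross_entropy(logits.reshape(-1, len(TOKENS)),
                                   seq[:, 1:].reshape(-1))
loss.backward()                                      # standard next-token distillation loss
```

Because the reward tokens sit inside the sequence, the trained model can condition later samples on which earlier formulas scored well, which is what "updating its policy in context" means here.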

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper develops a new method, FormulaGPT, that uses artificial intelligence to find mathematical formulas from observational data. It combines the strengths of two existing approaches, genetic programming and reinforcement learning, with the speed of a pre-trained Transformer. The goal is an algorithm that can find formulas quickly while also handling noisy or unexpected data well. The method uses massive amounts of training data to teach a special type of neural network called a Transformer, which allows the algorithm to learn from its mistakes and adapt to new situations without retraining. The results show that FormulaGPT outperforms existing methods at finding mathematical formulas and is more robust to noise.
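
The "learn from its mistakes" behavior can be pictured as a feedback loop at inference time: sample a candidate formula, score it on the data, append the (formula, reward) pair to the context, and sample again. Below is a hedged sketch of such a loop, reusing TOKENS, stoi, encode_history, and a trained HistoryTransformer from the sketch above; evaluate_reward is a hypothetical, user-supplied scoring function.

```python
import torch

# Hypothetical in-context search loop: the model's "policy" improves purely
# by conditioning on a growing context of (expression, reward) pairs --
# there are no gradient updates at test time.
@torch.no_grad()
def in_context_search(model, evaluate_reward, max_rounds=10, max_expr_len=8):
    history = []                          # (expr_tokens, reward) pairs seen so far
    best_expr, best_reward = None, -1.0
    for _ in range(max_rounds):
        ctx = (encode_history(history).unsqueeze(0) if history
               else torch.tensor([[stoi["<sep>"]]]))
        expr = []
        for _ in range(max_expr_len):     # autoregressively sample one expression
            probs = torch.softmax(model(ctx)[0, -1], dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            if TOKENS[nxt] == "<sep>":
                break                     # separator token ends the expression
            expr.append(TOKENS[nxt])      # (a real system would mask reward/pad tokens)
            ctx = torch.cat([ctx, torch.tensor([[nxt]])], dim=1)
        reward = evaluate_reward(expr)    # e.g. 1 / (1 + NMSE) on the observed data
        history.append((expr, reward))    # feedback re-enters the context
        if reward > best_reward:
            best_expr, best_reward = expr, reward
    return best_expr, best_reward
```

The design point worth noting is that feedback flows through the context window rather than through gradient updates, so the search adapts at the speed of a forward pass.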

Keywords

» Artificial intelligence  » GPT  » Inference  » Neural network  » Regression  » Reinforcement learning  » Transformer