
Summary of Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving, by Chenyang An et al.


Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving

by Chenyang An, Zhibo Chen, Qihao Ye, Emily First, Letian Peng, Jiayun Zhang, Zihan Wang, Sorin Lerner, Jingbo Shang

First submitted to arXiv on: 10 Apr 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Logic in Computer Science (cs.LO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Large language models can generate tactics for automated theorem proving, but in practice they must sample and try many proof steps before finding one that succeeds. This creates a discrepancy between training and inference: at inference time the model searches through failed attempts, yet it is typically trained only on successful proof paths. To address this, the authors propose training on both successful and failed search paths. They curate a dataset of intuitionistic propositional logic theorems in Lean, where proof correctness can be checked reliably. Their model, TrialMaster, trained with this trial-and-error information, proves more unseen theorems with fewer search attempts than models trained only on correct paths.
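
To give a concrete flavor of the theorems involved, below is a minimal sketch in Lean 4 of an intuitionistic propositional logic theorem with a tactic-style proof; the theorem and its name are illustrative assumptions, not drawn from the paper's dataset. A proof assistant like Lean accepts or rejects each tactic step, which is what allows search paths to be reliably labeled as successful or failed.

```lean
-- Illustrative sketch (not from the paper's dataset): a small
-- intuitionistic propositional logic theorem proved with tactics,
-- the kind of step-by-step output a fine-tuned model would generate.
theorem hypothetical_example (p q r : Prop)
    (hpq : p → q) (hqr : q → r) : p → r := by
  intro hp        -- assume p
  apply hqr       -- reduce the goal r to proving q
  exact hpq hp    -- q follows from hp : p via hpq
```
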
Low Difficulty Summary (original content by GrooveSquid.com)
This paper uses a big language model to help computers prove mathematical theorems. It’s like trying different puzzle pieces until you find the one that fits. The problem is that the computer usually doesn’t learn from its mistakes. The authors created a special dataset of math problems whose answers can be checked automatically, then trained the computer to learn from both its successes and its failures. The new model can solve more tricky math problems with less effort.

Keywords

  • Artificial intelligence
  • Inference
  • Language model
  • Large language model