
Summary of Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling, by Yiwen Ding et al.


Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling

by Yiwen Ding, Zhiheng Xi, Wei He, Zhuoyuan Li, Yitao Zhai, Xiaowei Shi, Xunliang Cai, Tao Gui, Qi Zhang, Xuanjing Huang

First submitted to arXiv on: 1 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high-difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces Guided Self-Improvement (GSI), a novel strategy to enhance the self-improving abilities of large language models (LLMs). LLMs can iteratively train on filtered rationales, but their performance often plateaus due to an imbalance in sampling. Easy queries are over-sampled, while difficult ones are under-sampled, leading to a long-tail distribution where solutions for challenging queries dwindle. To address this issue, GSI leverages Socratic-style guidance signals to help LLMs reason with complex queries, reducing exploration effort and computational overhead. Empirical evaluations on four models across diverse mathematical tasks demonstrate that GSI achieves a balance between performance and efficiency, outperforming brute-force sampling while being effective on held-out tasks.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine you’re trying to solve math problems, but your computer program is getting stuck on the hard ones. It’s doing okay on easy ones, but it’s not making progress on the harder ones. This paper figures out how to help these programs by giving them hints when they get stuck. The program can then use those hints to learn and improve faster. They tested this idea with four different math-solving programs and found that it works really well, especially for the harder problems. This is a big step forward in making computer programs better at solving complex problems.
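To make the GSI loop from the medium-difficulty summary concrete, here is a minimal sketch of one self-improvement round. It is an illustration under assumptions, not the authors' implementation: sample_rationales, is_correct, and socratic_hint are hypothetical placeholders standing in for the LLM sampler, the answer checker, and the Socratic-style guidance signal.

```python
# Sketch of one round of guided self-improvement (GSI), assuming placeholder
# components. Easy queries yield correct rationales with plain sampling; queries
# that yield none (the "tail") are resampled with a Socratic-style hint instead
# of brute-force extra sampling.

import random


def sample_rationales(query: str, n: int, hint: str = "") -> list[str]:
    """Placeholder: sample n chain-of-thought rationales from an LLM."""
    prompt = f"{hint}\n{query}" if hint else query
    return [f"rationale {i} for: {prompt}" for i in range(n)]


def is_correct(query: str, rationale: str) -> bool:
    """Placeholder: check the rationale's final answer against the reference."""
    return random.random() < 0.5


def socratic_hint(query: str) -> str:
    """Placeholder: a Socratic-style guidance signal, e.g. a leading question
    or partial plan that steers the model on a hard query."""
    return f"Hint: what sub-problems does '{query}' break into?"


def guided_self_improvement(queries: list[str], n: int = 8) -> list[tuple[str, str]]:
    """Collect (query, correct rationale) pairs for the next fine-tuning round."""
    training_pairs = []
    for q in queries:
        # Plain sampling first, as in standard self-improvement.
        correct = [r for r in sample_rationales(q, n) if is_correct(q, r)]
        if not correct:
            # Tail query with no correct solutions: resample with guidance.
            guided = sample_rationales(q, n, hint=socratic_hint(q))
            correct = [r for r in guided if is_correct(q, r)]
        training_pairs.extend((q, r) for r in correct)
    return training_pairs


if __name__ == "__main__":
    pairs = guided_self_improvement(["easy problem", "hard problem"])
    print(len(pairs), "pairs collected for fine-tuning")
```

In this sketch the guided resampling is what counteracts tail narrowing: hard queries still contribute training rationales instead of disappearing from the filtered data.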

Keywords

  • Artificial intelligence