
Summary of Threshold Filtering Packing For Supervised Fine-tuning: Training Related Samples Within Packs, by Jiancheng Dong et al.


by Jiancheng Dong, Lei Jiang, Wei Jin, Lu Cheng

First submitted to arxiv on: 18 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles a challenge in supervised fine-tuning (SFT) of autoregressive models, which must handle training sequences of varying lengths. The conventional approach concatenates data points until the designated maximum length is reached to enable efficient GPU processing, but this can cause cross-contamination between sequences on unrelated subjects. To address this, the authors introduce Threshold Filtering Packing (TFP), a method that selects samples with related context for each pack while maintaining sufficient diversity within it. The results show that TFP is simple to implement and scalable, and that it significantly enhances SFT performance, with observed improvements of up to 7% on GSM8K and 4% on HumanEval. TFP also shows promise for improving fairness while boosting prediction accuracy by 15%.
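To illustrate the idea of packing related-but-diverse samples, here is a minimal greedy sketch. The specific thresholds, the mean-cosine-similarity criterion, and the greedy loop are illustrative assumptions based on the summary above, not the authors' actual TFP algorithm.

```python
import numpy as np

def threshold_filtering_pack(embeddings, lengths, max_len, low=0.3, high=0.9):
    """Greedy sketch of threshold-based packing.

    Groups samples into packs so that samples in a pack are related
    (mean cosine similarity to the pack above `low`) but not near-duplicates
    (similarity below `high`), subject to a maximum pack length.
    All thresholds and the greedy strategy are hypothetical choices.
    """
    # Normalize rows so dot products are cosine similarities.
    embs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    remaining = list(range(len(lengths)))
    packs = []
    while remaining:
        seed = remaining.pop(0)          # start a new pack from the next sample
        pack, used = [seed], lengths[seed]
        for idx in list(remaining):
            if used + lengths[idx] > max_len:
                continue                 # would exceed the designated max length
            sims = embs[idx] @ embs[pack].T
            if low <= sims.mean() <= high:   # related to the pack, yet not redundant
                pack.append(idx)
                used += lengths[idx]
                remaining.remove(idx)
        packs.append(pack)
    return packs
```

For example, with four samples of length 4, embeddings `[[1,0],[0.8,0.6],[0,1],[0.6,0.8]]`, and `max_len=8`, the sketch groups the two related pairs into two packs, `[[0, 1], [2, 3]]`, rather than mixing unrelated subjects in one pack.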
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps make computers better at understanding text by fine-tuning their models. During fine-tuning, different types of information can accidentally get mixed together, which makes the models worse. To fix this problem, the authors developed a new way to prepare the training data called Threshold Filtering Packing (TFP). This method selects similar pieces of information and puts them together in a way that keeps everything organized. The results show that TFP makes computers better at understanding text and also helps make sure they are not biased towards certain types of information.

Keywords

» Artificial intelligence  » Autoregressive  » Boosting  » Fine tuning  » Supervised