Kun: Answer Polishment for Chinese Self-Alignment with Instruction Back-Translation

by Tianyu Zheng, Shuyue Guo, Xingwei Qu, Jiawei Guo, Xinrun Du, Qi Jia, Chenghua Lin, Wenhao Huang, Jie Fu, Ge Zhang

First submitted to arXiv on: 12 Jan 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces Kun, a novel approach to create high-quality instruction-tuning datasets for large language models (LLMs) without relying on manual annotations. It uses a self-training algorithm based on instruction back-translation and answer polishment to generate a substantial dataset of over a million Chinese instructional data points from diverse sources like Wudao, Wanjuan, and SkyPile. The approach deviates from traditional methods by using a self-curation process to refine and select the most effective instruction-output pairs. Experiments with the 6B-parameter Yi model across various benchmarks demonstrate Kun’s robustness and scalability. The method enhances data retention and clarity through algorithmic advancements and reduces reliance on costly manual annotations, presenting a scalable and efficient solution for improving LLMs’ instruction-following capabilities.
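Since the pipeline above is described only in prose, here is a minimal sketch of its three stages — instruction back-translation, answer polishment, and self-curation — in Python. Everything in it is an illustrative assumption, not the paper’s actual implementation: the function names, the prompts, the quality threshold, and the `call_llm` stub (which would be wired to a base model such as Yi-6B) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Pair:
    instruction: str
    answer: str
    score: float = 0.0

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (API or local model)."""
    raise NotImplementedError("wire this to your model of choice")

def back_translate(document: str) -> str:
    """Instruction back-translation: infer the instruction that an
    unlabeled document could be the answer to."""
    return call_llm(
        "Write the instruction that the following text best answers:\n"
        f"{document}"
    )

def polish_answer(instruction: str, document: str) -> str:
    """Answer polishment: rewrite the raw document so it directly and
    cleanly answers the generated instruction."""
    return call_llm(
        f"Instruction: {instruction}\n"
        f"Draft answer: {document}\n"
        "Rewrite the draft so it fully and clearly answers the instruction."
    )

def self_curate(pair: Pair) -> float:
    """Self-curation: have the model itself rate pair quality (1-5)."""
    rating = call_llm(
        f"Instruction: {pair.instruction}\nAnswer: {pair.answer}\n"
        "Rate the quality of this instruction-answer pair from 1 to 5. "
        "Reply with a single digit."
    )
    return float(rating.strip()[0])

def build_dataset(corpus: list[str], threshold: float = 4.0) -> list[Pair]:
    """Turn an unlabeled corpus into curated instruction-answer pairs."""
    dataset = []
    for doc in corpus:  # e.g. texts drawn from Wudao, Wanjuan, SkyPile
        instruction = back_translate(doc)
        answer = polish_answer(instruction, doc)
        pair = Pair(instruction, answer)
        pair.score = self_curate(pair)
        if pair.score >= threshold:  # keep only the most effective pairs
            dataset.append(pair)
    return dataset
```

The key design point the sketch illustrates is that no human annotation appears anywhere in the loop: the model generates the instructions, polishes the answers, and filters its own output.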
Low Difficulty Summary (original content by GrooveSquid.com)
Kun is a new way to create teaching datasets for big language models (LLMs) without needing humans to label everything. It uses a clever trick: instead of having people write instructions, the model reads existing text, guesses what instruction that text could be answering, and then polishes the text into a clean answer. This approach works really well and can use lots of different kinds of data from places like Wudao, Wanjuan, and SkyPile. The results are impressive, showing that Kun is robust and works with models like Yi. Overall, this method makes it easier to teach LLMs new things without needing a lot of human labor.

Keywords

  • Artificial intelligence
  • Instruction tuning
  • Self-training
  • Translation