
Summary of Enhanced Fine-Tuning of Lightweight Domain-Specific Q&A Model Based on Large Language Models, by Shenglin Zhang et al.


Enhanced Fine-Tuning of Lightweight Domain-Specific Q&A Model Based on Large Language Models

by Shenglin Zhang, Pengtian Zhu, Minghua Ma, Jiagang Wang, Yongqian Sun, Dongwen Li, Jingyu Wang, Qianying Guo, Xiaolei Hua, Lin Zhu, Dan Pei

First submitted to arxiv on: 22 Aug 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes Self-Evolution, a framework designed to address the challenges of fine-tuning large language models (LLMs) for specialized domains. It iteratively fine-tunes lightweight open-source LLMs over multiple rounds, using a strategy that filters and reinforces valuable knowledge at each round (a rough sketch of this loop appears after these summaries). The framework scores 174% higher on domain-specific question-answering evaluations than Qwen1.5-7B-Chat and even 22% higher than Qwen1.5-72B-Chat. Self-Evolution has been deployed in China Mobile’s daily operation and maintenance for 117 days, improving efficiency by an average of over 18.6%. The framework code has been released publicly.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models are great at answering general questions but struggle in specialized domains because they lack domain-specific knowledge. Companies that want to use these models face two big challenges: keeping customer information private and keeping costs down. This paper proposes a new fine-tuning approach called Self-Evolution. It uses smaller, open-source models and runs many small fine-tuning rounds in a row, with a strategy that focuses each round on the most valuable knowledge. In tests, Self-Evolution did much better than other models at answering domain-specific questions. It has been used in a company’s daily operations for over three months and has improved efficiency by an average of 18.6%.
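
The medium summary describes Self-Evolution as an iterative loop that filters and reinforces valuable knowledge between fine-tuning rounds. The Python sketch below shows one plausible shape for such a loop, purely as illustration: every function name, threshold, and data structure here is an assumption, not the authors' released implementation.

```python
# Hypothetical sketch of a Self-Evolution-style loop: generate candidate Q&A
# data with a lightweight LLM, keep only the samples judged valuable, and
# fine-tune on them for the next round. All helpers below are illustrative
# stubs, not the paper's actual code.

from dataclasses import dataclass
from typing import List


@dataclass
class QASample:
    question: str
    answer: str
    score: float = 0.0  # estimated "value" of this sample


def generate_qa_pairs(model, documents: List[str]) -> List[QASample]:
    """Stub: use the current model to draft Q&A pairs from domain documents."""
    return [QASample(question=f"What does document {i} describe?",
                     answer=doc[:50]) for i, doc in enumerate(documents)]


def score_sample(model, sample: QASample) -> float:
    """Stub: rate how much new, correct domain knowledge a sample adds."""
    return min(1.0, len(sample.answer) / 50.0)


def fine_tune(model, samples: List[QASample]):
    """Stub: one supervised fine-tuning pass over the retained samples."""
    print(f"fine-tuning on {len(samples)} samples")
    return model


def self_evolution(model, documents: List[str], rounds: int = 3,
                   keep_threshold: float = 0.5):
    """Filter-and-reinforce loop: only high-value samples feed the next round."""
    for r in range(rounds):
        candidates = generate_qa_pairs(model, documents)
        for s in candidates:
            s.score = score_sample(model, s)
        retained = [s for s in candidates if s.score >= keep_threshold]
        print(f"round {r}: kept {len(retained)}/{len(candidates)} samples")
        model = fine_tune(model, retained)
    return model


if __name__ == "__main__":
    docs = ["Network alarm handling procedure ...",
            "Base station maintenance checklist ..."]
    self_evolution(model=None, documents=docs)
```

In a real system the stubs would be replaced with actual data generation, value scoring, and supervised fine-tuning calls on a lightweight open-source LLM such as Qwen1.5-7B-Chat; the filtering step is what lets each round reinforce domain knowledge rather than noise.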

Keywords

  • Artificial intelligence
  • Fine tuning
  • Question answering