Summary of Advancing Large Language Model Attribution through Self-Improving, by Lei Huang et al.
Advancing Large Language Model Attribution through Self-Improving
by Lei Huang, Xiaocheng Feng, Weitao Ma, Liang Zhao, Yuchun Fan, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin
First submitted to arXiv on: 17 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract; read it on arXiv. |
Medium | GrooveSquid.com (original content) | A novel self-improvement framework for large language models (LLMs) is introduced to enhance their ability to generate text with citations to evidence sources, mitigating hallucinations and improving verifiability. The Self-Taught AttRibuTion (START) framework iteratively improves the attribution capability of LLMs without manual annotation. START first uses the model to self-construct synthetic training data for warming up, then utilizes fine-grained preference supervision signals constructed from its sampled responses to encourage robust, comprehensive, and attributable generation. Experimental results on three open-domain question-answering datasets demonstrate significant performance gains without relying on human annotations or advanced models. |
Low | GrooveSquid.com (original content) | Large language models are getting better at answering questions using information they find online. But how can we make sure the answers are correct and based on real evidence? One way is to teach these models to include citations to their sources, like a research paper would. However, this usually requires a lot of human labeling work and money. A new approach called START helps large language models learn to do this without that help: the model builds its own training data, which makes it better at finding the right information and citing its sources. When tested on three different question-answering tasks, START worked really well and didn't need any human annotations. |
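The two-stage loop the medium summary describes (warm up on self-constructed data, then learn from preference signals mined from the model's own sampled responses) can be sketched in toy form. Everything below is a hypothetical illustration for intuition only, not the paper's implementation: the citation-counting scorer, the mock sampler, and the pairing helper are all stand-ins.

```python
# Toy sketch of a START-style self-improvement round (illustrative only).

def attribution_score(response: str) -> int:
    """Toy proxy for attribution quality: count citation markers like [1].
    The real framework uses fine-grained supervision signals, not this."""
    return response.count("[")

def sample_responses(question: str, n: int = 4) -> list[str]:
    """Stand-in for sampling n candidate answers from the model.
    Here each candidate simply carries a different number of citations."""
    return [f"{question} answer with " + "[1]" * k for k in range(n)]

def build_preference_pairs(responses: list[str]) -> list[tuple[str, str]]:
    """Rank candidates by the attribution proxy and pair the best against
    the worst, mimicking preference-pair construction from self-samples."""
    ranked = sorted(responses, key=attribution_score, reverse=True)
    return [(ranked[0], ranked[-1])]  # (preferred, rejected)

def start_iteration(questions: list[str]) -> list[tuple[str, str]]:
    """One self-improvement round: sample, score, collect preference pairs.
    A real iteration would then fine-tune the model on these pairs."""
    pairs = []
    for q in questions:
        pairs.extend(build_preference_pairs(sample_responses(q)))
    return pairs
```

Running `start_iteration(["Q1"])` pairs the most-cited candidate with the least-cited one; iterating this loop is what lets the framework improve attribution without any human annotation.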
Keywords
» Artificial intelligence » Question answering