Summary of Understanding Finetuning for Factual Knowledge Extraction, by Gaurav Ghosal et al.


Understanding Finetuning for Factual Knowledge Extraction

by Gaurav Ghosal, Tatsunori Hashimoto, Aditi Raghunathan

First submitted to arXiv on: 20 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This research paper examines how fine-tuning on lesser-known facts affects the factuality of question answering models. The study shows that fine-tuning on facts that are poorly stored in the pretrained model can lead to a significant drop in factuality, even when every fact is seen during training. The authors establish this phenomenon theoretically and confirm it experimentally, showing that fine-tuning on lesser-known facts can cause models to ignore subject entity names and fall back on generic, plausible-sounding responses. On three question answering benchmarks (PopQA, Entity Questions, and MMLU) and two language models (Llama-2-7B and Mistral-7B), fine-tuning on a subset of better-known examples matches or outperforms fine-tuning on the entire dataset; a minimal illustrative sketch of this kind of subset selection follows the summaries below. The results shed light on the interaction between pretraining knowledge and fine-tuning data, highlighting the importance of accounting for how facts are stored in the pretrained model when fine-tuning for knowledge-intensive tasks.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This study looks at what happens when you train question answering models on different kinds of facts. The researchers found that training on lesser-known facts can actually make the models less accurate, because the models may ignore important details, such as which person or thing a question is about, and give generic answers instead. They tested this idea on three sets of questions (PopQA, Entity Questions, and MMLU) and two language models (Llama-2-7B and Mistral-7B), and found that training on better-known facts can actually improve the models' accuracy.
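
Illustrative sketch (not from the paper)

To make the "fine-tune on better-known examples" idea concrete, below is a minimal, hypothetical Python sketch of one way such a subset could be selected: score each fine-tuning question-answer pair by how much probability the pretrained model already assigns to the gold answer, then keep only the highest-scoring fraction. This is not the authors' released code or exact procedure; the checkpoint name, prompt format, and keep_fraction value are assumptions made purely for illustration.

    # Hypothetical sketch: select "better-known" QA pairs for fine-tuning by scoring
    # each example with the pretrained model's likelihood of the gold answer.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; any causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto"
    )
    model.eval()

    def answer_log_likelihood(question: str, answer: str) -> float:
        # Average log-probability the pretrained model assigns to the gold answer tokens.
        prompt_ids = tokenizer(f"Q: {question}\nA:", return_tensors="pt").input_ids.to(model.device)
        answer_ids = tokenizer(" " + answer, return_tensors="pt",
                               add_special_tokens=False).input_ids.to(model.device)
        input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
        with torch.no_grad():
            logits = model(input_ids).logits
        # logits[:, :-1] predicts tokens 1..L-1; look up the log-prob of each answer token.
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        answer_positions = range(prompt_ids.shape[1] - 1, input_ids.shape[1] - 1)
        token_scores = [log_probs[0, pos, input_ids[0, pos + 1]].item() for pos in answer_positions]
        return sum(token_scores) / len(token_scores)

    def select_better_known(qa_pairs, keep_fraction=0.5):
        # Keep the fine-tuning examples whose facts the pretrained model already stores best.
        scored = sorted(qa_pairs,
                        key=lambda ex: answer_log_likelihood(ex["question"], ex["answer"]),
                        reverse=True)
        return scored[: int(len(scored) * keep_fraction)]

    # Usage (hypothetical data): finetune_set = select_better_known(all_qa_pairs, keep_fraction=0.5)

The scoring rule here (closed-book answer likelihood under the pretrained model) is just one plausible proxy for how well a fact is stored; other proxies, such as entity popularity, could be substituted.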

Keywords

» Artificial intelligence  » Fine tuning  » Llama  » Pretraining  » Question answering