Summary of "Selective Fine-tuning on LLM-labeled Data May Reduce Reliance on Human Annotation: A Case Study Using Schedule-of-Event Table Detection," by Bhawesh Kumar et al.


Selective Fine-tuning on LLM-labeled Data May Reduce Reliance on Human Annotation: A Case Study Using Schedule-of-Event Table Detection

by Bhawesh Kumar, Jonathan Amar, Eric Yang, Nan Li, Yugang Jia

First submitted to arXiv on: 9 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
In this study, researchers aim to improve the performance of Large Language Models (LLMs) on a specific healthcare application by leveraging noisy labels generated by gemini-pro 1.0 for a table classification task. They fine-tune PaLM-2 using parameter-efficient fine-tuning (PEFT) and introduce a filtering mechanism that keeps only high-confidence labels, reducing noise in the auto-generated training data. The results show that the fine-tuned PaLM-2 outperforms gemini-pro 1.0 and other LLMs on Schedule-of-Event (SoE) table detection, with performance comparable to fine-tuning on labels from non-expert human annotators.
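The paper's exact filtering mechanism is not described on this page, but the core idea of keeping only high-confidence LLM labels can be sketched as follows. This is an illustrative example, not the authors' implementation: it assumes each table is labeled several times by the LLM and keeps only examples where the sampled labels mostly agree.

```python
from collections import Counter

def filter_high_confidence(samples, threshold=0.9):
    """Keep examples whose repeated LLM labels agree above `threshold`.

    `samples` maps example_id -> list of labels sampled from the LLM.
    Returns {example_id: majority_label} for confident examples only.
    Hypothetical sketch; the paper may estimate confidence differently.
    """
    kept = {}
    for ex_id, labels in samples.items():
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= threshold:
            kept[ex_id] = label
    return kept

# Toy usage: three candidate tables, five sampled labels each.
samples = {
    "t1": ["SoE", "SoE", "SoE", "SoE", "SoE"],      # unanimous -> kept
    "t2": ["SoE", "other", "SoE", "other", "SoE"],  # 60% agreement -> dropped
    "t3": ["other", "other", "other", "other", "other"],
}
print(filter_high_confidence(samples))  # {'t1': 'SoE', 't3': 'other'}
```

The filtered, high-agreement subset would then serve as the fine-tuning set for the smaller model, trading dataset size for label quality.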
Low Difficulty Summary (original content by GrooveSquid.com)
This study shows how Large Language Models can be used in healthcare applications like detecting care plans in clinical trial protocols. Researchers improve a model called PaLM-2 by training it on noisy labels generated by gemini-pro 1.0. They also create a way to keep only the most confident labels, which makes the training data cleaner. The results show that this approach can be effective and is potentially useful when expert annotations are hard to get or expensive.

Keywords

» Artificial intelligence  » Classification  » Fine tuning  » Gemini  » Palm  » Parameter efficient