Summary of Prompting Large Language Models for Clinical Temporal Relation Extraction, by Jianping He et al.


Prompting Large Language Models for Clinical Temporal Relation Extraction

by Jianping He, Laila Rasmy, Haifang Li, Jianfu Li, Zenan Sun, Evan Yu, Degui Zhi, Cui Tao

First submitted to arxiv on: 4 Dec 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper aims to improve large language models (LLMs) for clinical temporal relation extraction (CTRE) in both few-shot and fully supervised settings. The study evaluates four LLMs: the encoder-based GatorTron-Base and GatorTron-Large, and the decoder-based LLaMA3-8B and MeLLaMA-13B. Four fine-tuning strategies are explored for GatorTron-Base, while GatorTron-Large is assessed with two parameter-efficient fine-tuning strategies. Under the fully supervised setting, Hard-Prompting with an unfrozen GatorTron-Base achieves the highest F1 score, surpassing the state-of-the-art model by 3.74%. Additionally, variants of QLoRA adapted to GatorTron-Large and standard fine-tuning of GatorTron-Base also exceed the state-of-the-art model in this setting. The findings highlight the importance of selecting models and fine-tuning strategies appropriate to the task requirements and data availability.
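To make the "hard-prompting" idea concrete: the input clinical text and the two events of interest are wrapped in a fixed natural-language template containing a [MASK] slot, which a masked language model such as GatorTron would then fill with a relation label. The sketch below is illustrative only; the template wording, label set, and function name are assumptions, not the paper's actual prompt design.

```python
# Hypothetical sketch of a hard prompt for clinical temporal relation
# extraction (CTRE). A fixed template embeds the sentence and the two
# events, leaving a [MASK] slot for a masked language model to fill
# with a relation label. The label set below is assumed for
# illustration and is not taken from the paper.

RELATION_LABELS = ["before", "after", "overlap"]  # assumed label set

def build_hard_prompt(sentence: str, event_a: str, event_b: str) -> str:
    """Wrap a clinical sentence and two events in a fixed prompt template."""
    return (
        f"{sentence} In this context, the temporal relation between "
        f'"{event_a}" and "{event_b}" is [MASK].'
    )

prompt = build_hard_prompt(
    "The patient developed a fever two days after surgery.",
    "surgery",
    "fever",
)
print(prompt)
```

In a full pipeline, the [MASK] token's predicted distribution would be restricted to the verbalized relation labels, and "unfrozen" means the encoder's weights are updated during training rather than kept fixed.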
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about improving computers that can understand medical records. It uses special language models to help them learn how to extract important information from these records. The study tries different ways to teach the computers, and finds one method works really well for extracting temporal relationships (like when a patient got sick). This could help doctors make better decisions and improve patient care.

Keywords

» Artificial intelligence  » Decoder  » Encoder  » F1 score  » Few shot  » Fine tuning  » Parameter efficient  » Prompting  » Supervised