

Harmonising the Clinical Melody: Tuning Large Language Models for Hospital Course Summarisation in Clinical Coding

by Bokang Bi, Leibo Liu, Sanja Lujic, Louisa Jorm, Oscar Perez-Concha

First submitted to arxiv on: 23 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)
This study addresses the challenge of summarizing clinical documentation in Electronic Medical Record systems, a task crucial for clinical coders. Large language models (LLMs) have performed well on shorter summarization tasks, but summarizing a full hospital course remains an open problem. The researchers adapted three pre-trained LLMs (Llama 3, BioMistral, and Mistral Instruct v0.1) to the hospital course summarization task using Quantized Low Rank Adaptation (QLoRA) fine-tuning. They built a free-text clinical dataset from MIMIC-III by concatenating various clinical notes as input text, paired with ground-truth Brief Hospital Course sections extracted from discharge summaries, for model training. The fine-tuned models were evaluated using BERTScore and ROUGE metrics to assess the effectiveness of clinical-domain fine-tuning, and their practical utility was further validated with a novel hospital course summary assessment metric tailored specifically to clinical coding. The findings show that fine-tuning pre-trained LLMs for the clinical domain can significantly improve their performance on hospital course summarization, suggesting their potential as assistive tools for clinical coding.
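To give a feel for one of the evaluation metrics mentioned above, here is a minimal sketch of ROUGE-1 (unigram-overlap) scoring. Note this is an illustrative toy implementation, not the paper's actual evaluation code (which would typically use a full ROUGE library that also handles stemming and ROUGE-2/ROUGE-L); the example sentences are invented.

```python
from collections import Counter

def rouge_1(reference: str, candidate: str) -> dict:
    """Compute ROUGE-1 precision, recall, and F1 from unigram overlap."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Each shared unigram counts up to its minimum frequency in both texts.
    overlap = sum((ref_counts & cand_counts).values())
    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical ground-truth vs. model-generated hospital course sentence.
ref = "patient admitted with pneumonia treated with antibiotics and discharged"
gen = "patient admitted with pneumonia and discharged home"

scores = rouge_1(ref, gen)
```

A higher F1 here means the generated summary shares more of its wording with the ground-truth Brief Hospital Course section; BERTScore complements this by comparing contextual embeddings rather than exact tokens.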
Low Difficulty Summary (GrooveSquid.com, original content)
This study is about helping doctors and nurses summarize patient records. Right now, they have to read through a lot of information to get the important details. The researchers used special computer models to help with this task. They took three pre-trained models and changed them to work better for summarizing hospital notes. They used a big dataset of real patient records and tested the models to see how well they worked. The results show that these models can really help summarize hospital notes, which is important for getting the right information for medical coding.

Keywords

» Artificial intelligence  » Fine tuning  » Llama  » Low rank adaptation  » Rouge  » Summarization