
Summary of Prompting Large Language Models for Zero-Shot Clinical Prediction with Structured Longitudinal Electronic Health Record Data, by Yinghao Zhu et al.


Prompting Large Language Models for Zero-Shot Clinical Prediction with Structured Longitudinal Electronic Health Record Data

by Yinghao Zhu, Zixiang Wang, Junyi Gao, Yuning Tong, Jingkun An, Weibin Liao, Ewen M. Harrison, Liantao Ma, Chengwei Pan

First submitted to arXiv on: 25 Jan 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This research explores the potential of Large Language Models (LLMs) such as GPT-4 to process structured longitudinal Electronic Health Record (EHR) data, which is particularly relevant in scenarios where traditional predictive models struggle due to a lack of historical data. The authors investigate the zero-shot capabilities of LLMs and design a prompting approach that accounts for the specific characteristics of EHR data and employs an in-context learning strategy aligned with clinical contexts (sketched below). The results show that LLMs can improve prediction performance by about 35% on key tasks such as mortality, length-of-stay, and 30-day readmission prediction on the MIMIC-IV and TJH datasets, surpassing machine learning models in few-shot settings.

Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how computers can understand and work with very detailed patient records. These records are important for making quick decisions when new diseases emerge. The authors used special computer models called Large Language Models to see if they could make good predictions from these records. They made the models “understand” what the records meant by giving them clues about specific parts of the records and by showing them examples in a way that makes sense for doctors. This helped the models do a much better job than usual, making accurate predictions about things like how long patients would stay in the hospital or whether they would need to come back within 30 days.

Keywords

* Artificial intelligence  * Few-shot  * GPT  * Prompting  * Zero-shot