
Summary of Large Language Multimodal Models for 5-Year Chronic Disease Cohort Prediction Using EHR Data, by Jun-En Ding et al.


Large Language Multimodal Models for 5-Year Chronic Disease Cohort Prediction Using EHR Data

by Jun-En Ding, Phan Nguyen Minh Thao, Wen-Chih Peng, Jian-Zhe Wang, Chun-Cheng Chug, Min-Chen Hsieh, Yun-Chien Tseng, Ling Chen, Dongsheng Luo, Chi-Te Wang, Pei-fu Chen, Feng Liu, Fang-Ming Hung

First submitted to arXiv on: 2 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed Large Language Multimodal Models (LLMMs) framework combines multimodal data from clinical notes and laboratory test results to predict chronic disease risk. The model incorporates a text-embedding encoder, a multi-head attention layer, and a deep neural network module to merge blood features with chronic disease semantics in a shared latent space. Experiments show that combining ClinicalBERT and PubMedBERT with attention fusion achieves 73% accuracy in multiclass chronic disease and diabetes prediction. Additionally, the Flan-T5 model reaches a 76% Area Under the ROC Curve (AUROC) by transforming laboratory test values into textual descriptions.
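The attention-fusion idea above (projecting note embeddings and lab features into one latent space, then mixing them with attention) can be sketched roughly as follows. This is a minimal single-head illustration, not the paper's implementation; all dimensions, weight matrices, and the single-head simplification are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(text_emb, lab_feats, W_q, W_k, W_v):
    # Project clinical-note embeddings to queries and lab-test features to
    # keys/values, then fuse with scaled dot-product attention so each note
    # representation attends over the blood features (single head for brevity;
    # the paper's module uses multi-head attention).
    Q = text_emb @ W_q                       # (n_notes, d)
    K = lab_feats @ W_k                      # (n_labs, d)
    V = lab_feats @ W_v                      # (n_labs, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # (n_notes, n_labs)
    return softmax(scores, axis=-1) @ V      # fused latent representation

rng = np.random.default_rng(0)
d_text, d_lab, d = 8, 5, 4                       # hypothetical dimensions
text_emb = rng.normal(size=(3, d_text))          # e.g. 3 note-chunk embeddings
lab_feats = rng.normal(size=(6, d_lab))          # e.g. 6 blood-test features
fused = attention_fusion(text_emb, lab_feats,
                         rng.normal(size=(d_text, d)),
                         rng.normal(size=(d_lab, d)),
                         rng.normal(size=(d_lab, d)))
print(fused.shape)  # (3, 4)
```

In the full framework this fused representation would feed the deep neural network module that produces the disease-risk prediction.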
Low Difficulty Summary (original content by GrooveSquid.com)
This study uses electronic health records to train language models that predict chronic disease risk. The researchers collected data from hospitals in Taiwan and used it to develop a new framework for combining clinical notes and laboratory test results. The model is tested on predicting diabetes, and the results show that it can accurately identify patients at risk of developing the condition. This approach could help doctors diagnose and treat diseases earlier.
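The summaries above mention that Flan-T5 works by turning laboratory test values into textual descriptions. A minimal sketch of that serialization step might look like the following; the lab names, units, and sentence template are illustrative assumptions, not the paper's actual prompt format.

```python
def labs_to_text(labs):
    # Convert numeric lab results into a sentence that a text-to-text model
    # (e.g. Flan-T5) can consume alongside clinical notes.
    # `labs` is a list of (name, value, unit) tuples -- a hypothetical schema.
    parts = [f"{name} is {value} {unit}" for name, value, unit in labs]
    return "The patient's laboratory results: " + "; ".join(parts) + "."

labs = [("fasting glucose", 132, "mg/dL"),
        ("HbA1c", 7.4, "%"),
        ("creatinine", 1.1, "mg/dL")]
print(labs_to_text(labs))
```

The resulting sentence can then be concatenated with the clinical note text and fed to the language model as ordinary input, which is what lets a text-only model reason over numeric lab values.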

Keywords

» Artificial intelligence  » Attention  » BERT  » Embedding  » Encoder  » Latent space  » Multi-head attention  » Neural network  » ROC curve  » Semantics