
From Basic to Extra Features: Hypergraph Transformer Pretrain-then-Finetuning for Balanced Clinical Predictions on EHR

by Ran Xu, Yiwen Lu, Chang Liu, Yong Chen, Yan Sun, Xiao Hu, Joyce C Ho, Carl Yang

First submitted to arXiv on: 9 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors): the paper's original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
HTP-Star is a deep learning approach for modeling Electronic Health Records (EHRs) that addresses a limitation of existing methods: their reliance on an extensive, fixed set of patient features. The method represents EHR data with hypergraph structures and uses a pretrain-then-finetune framework, allowing additional features to be integrated seamlessly when they are available. Two techniques are designed to enhance model robustness during fine-tuning: Smoothness-inducing Regularization and Group-balanced Reweighting. Experiments on two real EHR datasets show that HTP-Star outperforms various baselines while balancing performance across patients with basic and extra features.
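To make the Group-balanced Reweighting idea more concrete, here is a minimal sketch of one common way such reweighting can be done: each patient's loss is scaled inversely to the size of their group (e.g., patients with only basic features vs. patients with extra features), so that the smaller group is not drowned out during fine-tuning. The inverse-frequency scheme and the function name below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def group_balanced_weights(group_ids):
    """Assign each sample a weight inversely proportional to its group's
    size, so every group contributes equally to the total loss.
    NOTE: a generic inverse-frequency sketch, not HTP-Star's exact scheme."""
    group_ids = np.asarray(group_ids)
    groups, counts = np.unique(group_ids, return_counts=True)
    # Per-group weight: n_samples / (n_groups * group_size).
    # Small groups get larger weights; each group's weights sum to the same total.
    per_group = len(group_ids) / (len(groups) * counts.astype(float))
    lookup = dict(zip(groups, per_group))
    return np.array([lookup[g] for g in group_ids])

# Example: 3 patients with only basic features (group 0), 1 with extra (group 1).
weights = group_balanced_weights([0, 0, 0, 1])
```

In this example the total weight of each group is equal (2.0), so the single extra-feature patient carries as much influence on the loss as the three basic-feature patients combined, which is the balancing effect the summary describes.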
Low Difficulty Summary (written by GrooveSquid.com, original content)
HTP-Star is a new way to use deep learning for Electronic Health Records (EHRs). Right now, most methods need lots of information about each patient, which makes it hard to include patients who don't have all that information. HTP-Star changes this by using special graph structures and a two-stage training process (pretraining, then fine-tuning) that can take in extra features when they are available. Two new techniques help the model handle different types of data. The results show that HTP-Star is better than other methods, even when patients have different amounts of information.

Keywords

» Artificial intelligence  » Deep learning  » Fine tuning  » Regularization