Summary of Suicide Phenotyping from Clinical Notes in Safety-Net Psychiatric Hospital Using Multi-Label Classification with Pre-Trained Language Models, by Zehan Li et al.
Suicide Phenotyping from Clinical Notes in Safety-Net Psychiatric Hospital Using Multi-Label Classification with Pre-Trained Language Models
by Zehan Li, Yan Hu, Scott Lane, Salih Selek, Lokesh Shahani, Rodrigo Machado-Vieira, Jair Soares, Hua Xu, Hongfang Liu, Ming Huang
First submitted to arXiv on: 27 Sep 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Information Retrieval (cs.IR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Pre-trained language models can identify suicidal events in unstructured clinical narratives, which can improve the quality of care in psychiatric settings. Researchers evaluated four BERT-based models under two fine-tuning strategies for detecting coexisting suicidal events in 500 annotated psychiatric evaluation notes; each model was fine-tuned on the annotated notes and the resulting classifiers were compared (a minimal sketch of the multi-label fine-tuning setup appears after this table). RoBERTa performed best, with an accuracy of 0.86 and an F1 score of 0.78, followed by MentalBERT (0.83/0.74) and BioClinicalBERT (0.82/0.72). The findings suggest that the choice of pre-trained model, the use of domain-relevant data, and fine-tuning with a single multi-label classification strategy can improve the performance of suicide phenotyping models. |
Low | GrooveSquid.com (original content) | Identifying suicidal events in psychiatric settings can help prevent suicide. Researchers used language models to analyze clinical notes and find suicidal events. They tested four different models on 500 notes and found that one model, RoBERTa, was best at finding coexisting suicidal events. The researchers hope their work will help improve care for people with mental health issues. |
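For readers who want to see what a single multi-label fine-tuning strategy looks like in practice, the sketch below uses Hugging Face Transformers with a RoBERTa checkpoint and the `problem_type="multi_label_classification"` setting, so one model can flag several coexisting suicidal events per note. This is a minimal illustration under assumed names: the label set, example note, model checkpoint, and 0.5 threshold are hypothetical choices, not the authors' released code or exact annotation schema.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical label set for coexisting suicidal events; the paper's exact
# annotation schema may differ.
LABELS = [
    "suicidal_ideation",
    "suicide_attempt",
    "exposure_to_suicide",
    "non_suicidal_self_injury",
]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss per label
)

# One invented, de-identified example note for illustration.
note = "Patient endorses passive suicidal ideation; denies any attempt since last admission."
inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")

# Multi-hot target: a single note can document several events at once.
target = torch.tensor([[1.0, 0.0, 0.0, 0.0]])

# One training step; an optimizer loop over the annotated notes would wrap this.
model.train()
loss = model(**inputs, labels=target).loss  # BCEWithLogitsLoss under the hood
loss.backward()

# Inference: threshold each label independently.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
predicted = [label for label, p in zip(LABELS, probs.tolist()) if p >= 0.5]
print(predicted)
```

With this setup all event types share one encoder and are predicted jointly, which is the "single multi-label classification" strategy the medium summary highlights, presumably in contrast to fine-tuning a separate single-label classifier per event type.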
Keywords
» Artificial intelligence » BERT » Classification » F1 score » Fine-tuning » Language model