Summary of Multi-modal Masked Siamese Network Improves Chest X-ray Representation Learning, by Saeed Shurrab et al.
Multi-modal Masked Siamese Network Improves Chest X-Ray Representation Learning
by Saeed Shurrab, Alejandro Guerra-Manzanares, Farah E. Shamout
First submitted to arXiv on: 5 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | The proposed method incorporates Electronic Health Record (EHR) data during self-supervised pretraining with a Masked Siamese Network (MSN) to enhance the quality of chest X-ray representations. The approach investigates three types of EHR data: demographics, scan metadata, and inpatient stay information. Two vision transformer (ViT) backbones, ViT-Tiny and ViT-Small, are evaluated on three publicly available chest X-ray datasets: MIMIC-CXR, CheXpert, and NIH-14. Measured by linear evaluation of representation quality, the method significantly outperforms vanilla MSN and state-of-the-art self-supervised learning baselines (a minimal linear-probe sketch follows this table). |
Low | GrooveSquid.com (original content) | This paper proposes a new way to improve chest X-ray image representations using medical records. The authors add patient information from Electronic Health Records (EHR) during training, which helps the model learn better image representations. They tested the method on three different datasets and found that it performed much better than other methods. This could be an important step toward making medical imaging more accurate. |
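The linear evaluation protocol mentioned in the medium summary is a standard way to score frozen representations. Below is a minimal, hypothetical sketch (not the authors' code): a ViT-Small backbone is frozen after pretraining and only a logistic-regression probe is trained on its features. The model handle, dummy tensors, and labels are illustrative placeholders, not the paper's actual MIMIC-CXR, CheXpert, or NIH-14 pipeline.

```python
# Hypothetical linear-evaluation sketch: freeze a ViT backbone, train only a linear probe.
import torch
import timm
from sklearn.linear_model import LogisticRegression

# Illustrative backbone; in the paper's setup, MSN-pretrained weights would be loaded here.
backbone = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=0)
backbone.eval()  # backbone stays frozen during linear evaluation

@torch.no_grad()
def extract_features(images):
    """Return frozen pooled features for a batch of image tensors."""
    return backbone(images)  # shape: (batch, embed_dim)

# Dummy tensors and labels stand in for a real chest X-ray dataset loader.
train_x = extract_features(torch.randn(8, 3, 224, 224)).numpy()
train_y = [0, 1, 0, 1, 0, 1, 0, 1]  # placeholder binary labels

# Only this linear classifier is trained; its accuracy scores the representation quality.
clf = LogisticRegression(max_iter=1000).fit(train_x, train_y)
print("train accuracy:", clf.score(train_x, train_y))
```

Higher probe accuracy under this kind of protocol is the evidence the summary refers to: adding EHR data during MSN pretraining yields stronger frozen representations than vanilla MSN and the other self-supervised baselines.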
Keywords
* Artificial intelligence * Pretraining * Self supervised * Siamese network * Vision transformer * ViT