Summary of Training a High-performance Retinal Foundation Model with Half-the-data and 400 Times Less Compute, by Justin Engelmann et al.
Training a high-performance retinal foundation model with half-the-data and 400 times less compute
by Justin Engelmann, Miguel O. Bernabeu
First submitted to arXiv on: 30 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Retinal foundation models can be adapted to downstream medical tasks with only small datasets. Researchers at Moorfields Eye Hospital developed RETFound-MEH, a model trained on 900,000 images, including private hospital data. A later model, DERETFound, achieved comparable performance using only 150,000 publicly available images. However, both models required substantial computational resources to train and use. To address this limitation, the authors propose a novel Token Reconstruction objective and use it to train RETFound-Green, a retinal foundation model trained on just 75,000 publicly available images with greatly reduced computational requirements. Their results show that RETFound-Green performs as well as or better than the previous models on a range of tasks using datasets from Brazil, India, and China. |
| Low | GrooveSquid.com (original content) | Scientists are working on new ways to improve artificial intelligence (AI) in medicine. They’ve created special models called “foundation models” that can learn from lots of images and then be used for other tasks with less data. The problem is that these models need a lot of computing power and data, which makes them expensive and bad for the environment. The authors of this paper developed a new approach that trains such a model using only 75,000 images, which is far more efficient. They tested this model on different tasks and found it performed just as well or better than other models that used more resources. This could lead to more affordable and sustainable AI solutions for medicine. |
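The summaries name a Token Reconstruction objective but do not describe it. As a purely illustrative sketch (the function name, data shapes, and choice of a mean-squared-error loss are assumptions, not details from the paper), one common form of such an objective trains a small "student" encoder to reproduce the per-patch token embeddings produced by a larger frozen "teacher" model:

```python
# Hypothetical sketch of a token-reconstruction loss (illustrative only;
# the paper's actual objective may differ). Each image is represented as a
# list of per-patch token embeddings; the student is trained to match the
# teacher's tokens under mean squared error.

def token_reconstruction_loss(student_tokens, teacher_tokens):
    """Mean squared error between student and teacher token embeddings.

    Both arguments are lists of equal-length vectors, one per image patch.
    """
    assert len(student_tokens) == len(teacher_tokens)
    total, count = 0.0, 0
    for s_tok, t_tok in zip(student_tokens, teacher_tokens):
        assert len(s_tok) == len(t_tok)
        total += sum((s - t) ** 2 for s, t in zip(s_tok, t_tok))
        count += len(s_tok)
    return total / count

# Toy example: two patches with 3-dimensional token embeddings.
student = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
teacher = [[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]]
print(token_reconstruction_loss(student, teacher))  # 1/6 ≈ 0.1667
```

Minimizing a loss like this lets a compact model learn from a larger one's representations, which is one plausible route to the large reductions in data and compute the paper reports.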
Keywords
* Artificial intelligence
* Token