Summary of The Limited Impact of Medical Adaptation of Large Language and Vision-Language Models, by Daniel P. Jeong et al.
The Limited Impact of Medical Adaptation of Large Language and Vision-Language Models
by Daniel P. Jeong, Pranav Mani, Saurabh Garg, Zachary C. Lipton, Michael Oberst
First submitted to arXiv on: 13 Nov 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on its arXiv page. |
Medium | GrooveSquid.com (original content) | This paper investigates the effectiveness of adapting large language models (LLMs) and vision-language models (VLMs) for medical applications through continued pretraining on biomedical corpora. The authors compare ten “medical” LLMs and two VLMs against their base models, finding that most medical LLMs and all medical VLMs fail to consistently improve over their base models in zero-shot, few-shot, and supervised fine-tuning settings for medical question answering (QA). For instance, on clinical-note-based QA tasks in the 3-shot setting, only 26.7% of medical LLMs outperformed their base models, with a substantial share performing worse than or merely tying with their base models. The study’s conclusions rest on direct, pairwise comparisons between each medical model and its base model, prompts optimized separately for each model, and accounting for statistical uncertainty (see the illustrative sketch below this table). The findings suggest that state-of-the-art general-domain models may already possess strong medical knowledge and reasoning capabilities, and the authors offer recommendations for strengthening future studies. |
Low | GrooveSquid.com (original content) | This paper looks at how well big language models can be used in medicine. Some people have been trying to make these models better by training them on lots of medical information. The authors tested ten “medical” language models and two vision-language models against their original versions, and found that most of the medical models didn’t do any better than the originals. In fact, some even did worse! The authors think this might be because general-purpose models are already really good at understanding medical concepts. |
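Illustrative Comparison Sketch
To make the “direct model comparisons” and “statistical uncertainty” ideas above concrete, here is a minimal Python sketch of one way such a head-to-head evaluation could be run: score each QA question as correct or incorrect for both the medical model and its base model, then use a paired bootstrap to put a confidence interval on the accuracy gap. This is not the authors’ code; the function name, the toy 0/1 correctness flags, and the bootstrap settings are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): paired comparison of a "medical"
# model against its base model on the same QA questions, with a bootstrap
# confidence interval on the accuracy difference to reflect statistical
# uncertainty. Correctness flags (1 = right, 0 = wrong) are assumed to be
# precomputed from each model's answers.
import numpy as np

def paired_bootstrap_gap(base_correct, medical_correct, n_boot=10_000, seed=0):
    """Return the observed accuracy gap (medical - base) and a 95% bootstrap CI."""
    base = np.asarray(base_correct, dtype=float)
    med = np.asarray(medical_correct, dtype=float)
    assert base.shape == med.shape, "both models must be scored on the same questions"

    rng = np.random.default_rng(seed)
    n = len(base)
    gaps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample questions with replacement
        gaps[b] = med[idx].mean() - base[idx].mean()

    observed = med.mean() - base.mean()
    lo, hi = np.percentile(gaps, [2.5, 97.5])
    return observed, (lo, hi)

# Hypothetical per-question scores for the two models.
base_correct    = [1, 0, 1, 1, 0, 1, 0, 1]
medical_correct = [1, 0, 1, 0, 0, 1, 1, 1]
gap, (lo, hi) = paired_bootstrap_gap(base_correct, medical_correct)
print(f"accuracy gap = {gap:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
# A confidence interval that straddles 0 would count as the medical model
# "tying with" its base model rather than beating it.
```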
Keywords
» Artificial intelligence » Few shot » Fine tuning » Pretraining » Question answering » Supervised » Zero shot