
Summary of MediSwift: Efficient Sparse Pre-trained Biomedical Language Models, by Vithursan Thangarasa et al.


MediSwift: Efficient Sparse Pre-trained Biomedical Language Models

by Vithursan Thangarasa, Mahmoud Salem, Shreyas Saxena, Kevin Leong, Joel Hestness, Sean Lie

First submitted to arXiv on: 1 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors: the original abstract)
Read the original abstract here

Medium Difficulty Summary (original GrooveSquid.com content)
Large language models (LLMs) are typically trained on general-purpose data, but domain-specific LLMs have shown they can outperform general-purpose models on specialized tasks such as biomedicine. While domain-specific pre-training improves efficiency and yields smaller models, training these LLMs is still computationally expensive, posing budgeting challenges. The authors introduce MediSwift, a suite of biomedical language models that apply sparse pre-training to domain-specific biomedical text, achieving up to a 2-2.5x reduction in training FLOPs by inducing up to 75% weight sparsity during pre-training (see the illustrative sketch below). Pre-training was carried out on the Cerebras CS-2 system, which is designed to accelerate unstructured weight sparsity and thereby improve training efficiency. Through subsequent dense fine-tuning and strategic soft prompting, MediSwift models outperform existing LLMs of up to 7B parameters on biomedical tasks such as PubMedQA, setting new benchmarks for the efficiency-accuracy trade-off.

Low Difficulty Summary (original GrooveSquid.com content)
Scientists have developed special language models for specific areas like medicine. These models are trained on medical text rather than general-purpose data, which already makes them more efficient. Training big models is still expensive and slow, though, so the MediSwift models go further: during training, up to three quarters of their internal connections (weights) are switched off, so much less computation is needed. This sparse training runs on a special computer (the Cerebras CS-2) designed to handle exactly this kind of processing. The results show that the MediSwift models perform better than existing models on medical tasks, such as answering questions about scientific articles.

Keywords

  • Artificial intelligence
  • Fine-tuning
  • Prompting