Summary of Large Language Models for Biomedical Text Simplification: Promising But Not There Yet, by Zihao Li et al.
Large Language Models for Biomedical Text Simplification: Promising But Not There Yet
by Zihao Li, Samuel Belkadi, Nicolo Micheletti, Lifeng Han, Matthew Shardlow, Goran Nenadic
First submitted to arXiv on: 7 Aug 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper reports on the system developed for participation in the PLABA 2023 task on biomedical abstract simplification, part of the TAC 2023 tracks. The team submitted three categories of model outputs: domain fine-tuned T5-like models, including Biomedical-T5 and Lay-SciFive; a fine-tuned BART-Large model with controllable attributes via tokens (BART-w-CTs); and ChatGPT-prompting. The paper also details the work done on BioGPT fine-tuning. In automatic evaluations using SARI scores, BeeManc ranked 2nd among all teams, while Lay-SciFive ranked 3rd among all evaluated systems. In human evaluations, BART-w-CTs ranked 2nd on Sentence-Simplicity and Term-Simplicity, while ChatGPT-prompting ranked highly on simplified-term accuracy, completeness, and faithfulness. (Illustrative sketches of SARI scoring and T5-style inference follow this table.) |
| Low | GrooveSquid.com (original content) | The paper is about a team that worked together to simplify biomedical abstracts. They used special models like Biomedical-T5 and Lay-SciFive to make the text easier to understand. The team also tested other ways of simplifying the text, like using prompts for ChatGPT. The results show that their methods were quite good, with some of them even ranking in the top two. This is important because making biomedical abstracts simpler can help people who aren’t experts in the field understand and learn from the research more easily. |
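
For readers curious what the automatic evaluation looks like in practice, below is a minimal sketch of computing a SARI score with the Hugging Face `evaluate` library. The sentences are invented for illustration and are not taken from the PLABA data or from the paper's system outputs.

```python
import evaluate  # pip install evaluate

# Load the SARI metric, the automatic score reported for the shared task.
sari = evaluate.load("sari")

# Invented example: a source sentence, one system simplification, and
# two human-written reference simplifications.
sources = ["Myocardial infarction is most frequently caused by occlusion of a coronary artery."]
predictions = ["A heart attack usually happens when an artery to the heart gets blocked."]
references = [[
    "Heart attacks are usually caused by a blocked artery that supplies the heart.",
    "A heart attack mostly occurs when a heart artery is blocked.",
]]

score = sari.compute(sources=sources, predictions=predictions, references=references)
print(score)  # e.g. {'sari': ...}; the exact value depends on the texts
```

The fine-tuned T5-like systems (Biomedical-T5, Lay-SciFive) are sequence-to-sequence models, so inference follows the usual encoder-decoder generation pattern. The sketch below uses the generic `t5-base` checkpoint and a hypothetical `simplify:` task prefix as stand-ins; the team's fine-tuned checkpoints and exact input format are not reproduced here, and without that fine-tuning the output will not be a good simplification.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Stand-in checkpoint; the team's Biomedical-T5 / Lay-SciFive weights are not assumed here.
model_name = "t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "Myocardial infarction is most frequently caused by occlusion of a coronary artery."
# Hypothetical task prefix; a fine-tuned simplification model would define its own input format.
inputs = tokenizer("simplify: " + sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
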
Keywords
- Artificial intelligence
- Prompting
- T5