Summary of WisPerMed at BioLaySumm: Adapting Autoregressive Large Language Models for Lay Summarization of Scientific Articles, by Tabea M. G. Pakull et al.
WisPerMed at BioLaySumm: Adapting Autoregressive Large Language Models for Lay Summarization of Scientific Articles
by Tabea M. G. Pakull, Hendrik Damm, Ahmad Idrissi-Yaghir, Henning Schäfer, Peter A. Horn, Christoph M. Friedrich
First submitted to arXiv on: 20 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper addresses the challenge of making complex scientific publications accessible to non-experts through automatic lay summarization. The WisPerMed team fine-tuned large language models (LLMs) such as BioMistral and Llama3 to create concise summaries from biomedical texts. To enhance performance, they employed techniques such as instruction tuning, few-shot learning, and prompt variations that incorporate specific context information. Fine-tuning generally yielded the best performance across most evaluated metrics, and few-shot learning notably improved the models' ability to generate relevant and accurate texts, especially with well-crafted prompts. The team also developed a Dynamic Expert Selection (DES) mechanism that chooses among generated outputs based on readability and factuality metrics (a hedged code sketch of this pipeline follows the table). WisPerMed placed 4th among 54 participants, outperforming the baseline by approximately 5.5 percentage points. |
| Low | GrooveSquid.com (original content) | This paper is about making science easier for everyone to understand. The authors used special computer models to create simple versions of complex scientific texts. They tested different ways to make these models better, such as giving them instructions and letting them learn from a few examples. The results show that this approach can make the summaries more accurate and easier to read. The team also developed a new system that chooses the best summary based on how readable and accurate it is. |
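The medium-difficulty summary mentions two techniques that lend themselves to a short illustration: few-shot prompting of fine-tuned LLMs and a Dynamic Expert Selection (DES) step that picks among candidate outputs using readability and factuality metrics. The Python sketch below shows one plausible shape for such a pipeline; the model names, prompt template, scoring weights, and the `factuality_score` hook are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a few-shot lay-summarization pipeline with a
# Dynamic-Expert-Selection-style picker: several candidate models ("experts")
# each produce a lay summary, and the candidate scoring best on a combined
# readability/factuality measure is kept. Model names, prompt template,
# weights, and the factuality hook are illustrative assumptions.
from transformers import pipeline  # pip install transformers torch
import textstat                    # pip install textstat

# Hypothetical one-shot prompt: a worked abstract/lay-summary pair, then the new abstract.
FEW_SHOT_PROMPT = (
    "Abstract: {example_abstract}\n"
    "Lay summary: {example_lay_summary}\n\n"
    "Abstract: {abstract}\n"
    "Lay summary:"
)

def generate_candidates(abstract, example, model_names):
    """Ask each candidate model for a lay summary of the given abstract."""
    prompt = FEW_SHOT_PROMPT.format(
        example_abstract=example["abstract"],
        example_lay_summary=example["lay_summary"],
        abstract=abstract,
    )
    candidates = []
    for name in model_names:  # e.g. fine-tuned BioMistral / Llama3 checkpoints
        generator = pipeline("text-generation", model=name)
        output = generator(prompt, max_new_tokens=200, return_full_text=False)
        candidates.append(output[0]["generated_text"].strip())
    return candidates

def select_best(candidates, factuality_score, w_read=0.5, w_fact=0.5):
    """Return the candidate with the highest weighted readability + factuality score."""
    def score(text):
        readability = textstat.flesch_reading_ease(text)  # higher = easier to read
        return w_read * readability + w_fact * factuality_score(text)
    return max(candidates, key=score)
```

In practice, `factuality_score` would be an automatic factuality metric and the weights would be tuned on validation data; both are placeholders in this sketch.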
Keywords
» Artificial intelligence » Few-shot » Fine-tuning » Instruction tuning » Prompt » Summarization