Summary of Radiology Report Generation Via Multi-objective Preference Optimization, by Ting Xiao et al.
Radiology Report Generation via Multi-objective Preference Optimization
by Ting Xiao, Lei Shi, Peng Liu, Zhe Wang, Chenjia Bai
First submitted to arXiv on: 12 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes a novel approach to automatic radiology report generation (RRG) called Multi-objective Preference Optimization (MPO). Existing RRG methods rely on supervised regression, which may not align well with radiologists' heterogeneous preferences. MPO uses multi-dimensional reward functions and multi-objective reinforcement learning to align a pre-trained RRG model with multiple human preferences. This is achieved by conditioning the RRG model on a preference vector and optimizing a linearly weighted reward via RL (see the sketch after the table). Because the model is trained on diverse preference vectors, it can generate reports that cater to different preferences without further fine-tuning. |
| Low | GrooveSquid.com (original content) | The new approach can help alleviate radiologists' workload by generating reports that align with their individual preferences. It takes a pre-trained RRG model and optimizes it with multi-objective reinforcement learning so that it reflects multiple human preferences. This allows the generated report to prioritize fluency, clinical accuracy, or other important factors. |
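To make the linear scalarization idea concrete, here is a minimal Python sketch of how per-objective rewards might be combined with a sampled preference vector. The reward functions `fluency_reward` and `clinical_accuracy_reward` are hypothetical toy proxies invented for illustration only; the paper's actual reward design, RRG model, and RL algorithm are not reproduced here.

```python
import numpy as np

# Hypothetical per-objective reward functions (toy stand-ins for the
# multi-dimensional rewards described in the summary, e.g. fluency and
# clinical accuracy).
def fluency_reward(report: str) -> float:
    # Toy proxy: shorter average word length scores as "more fluent".
    words = report.split()
    return 1.0 / (1.0 + float(np.mean([len(w) for w in words]))) if words else 0.0

def clinical_accuracy_reward(report: str, reference: str) -> float:
    # Toy proxy: term overlap between the generated report and a reference report.
    report_terms = set(report.lower().split())
    ref_terms = set(reference.lower().split())
    return len(report_terms & ref_terms) / max(len(ref_terms), 1)

def scalarized_reward(report: str, reference: str, preference: np.ndarray) -> float:
    """Linearly weight the per-objective rewards by the preference vector."""
    rewards = np.array([
        fluency_reward(report),
        clinical_accuracy_reward(report, reference),
    ])
    return float(preference @ rewards)

# Example: sample a preference vector from the simplex, as one might do
# during training so the conditioned model sees diverse trade-offs.
rng = np.random.default_rng(0)
preference = rng.dirichlet(alpha=np.ones(2))
report = "mild cardiomegaly with no acute infiltrate"
reference = "heart size is mildly enlarged no acute infiltrate is seen"
print(preference, scalarized_reward(report, reference, preference))
```

Sampling the preference vector during training exposes the conditioned model to many trade-offs between objectives, which is what lets a single model serve different radiologist preferences at inference time without further fine-tuning.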
Keywords
» Artificial intelligence » Fine tuning » Optimization » Regression » Reinforcement learning » Supervised