


SUDO: a framework for evaluating clinical artificial intelligence systems without ground-truth annotations

by Dani Kiyasseh, Aaron Cohen, Chengsheng Jiang, Nicholas Altieri

First submitted to arXiv on: 2 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper’s original abstract serves as the high difficulty summary and is available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A clinical AI system is typically validated on held-out data it has never seen before, mimicking real-world deployment. However, when the unlabeled data encountered in the wild differ from the held-out set, a distribution shift occurs, and it becomes unclear how much trust can be placed in the system’s predictions. To address this, the authors introduce SUDO, a framework for evaluating AI systems without ground-truth annotations. SUDO assigns temporary pseudo-labels to wild data points, trains a model under each candidate label, and takes the label whose model performs best as the most likely one. Experiments on dermatology images, histopathology patches, and clinical reports show that SUDO can identify unreliable predictions and assess algorithmic bias in unannotated wild data, supporting the integrity of research findings and the ethical deployment of AI systems in medicine.
Low Difficulty Summary (written by GrooveSquid.com, original content)
AI systems are tested on held-out data to mimic real-world use, but when this data is different from what the system has seen before, it’s unclear how much trust can be placed in the results. To fix this, scientists have created a new way to evaluate AI without knowing the right answers. This method uses temporary labels and trains multiple models on the same data. The best-performing model suggests the most likely correct answer. Researchers tested this method with images of skin conditions, medical slides, and patient reports, showing it can spot incorrect predictions and help make sure AI systems are fair.
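The labeling procedure described in the summaries above can be sketched in a few lines. This is a minimal toy illustration, not the authors’ implementation: the synthetic two-cluster dataset, the logistic-regression model, and the held-out accuracy score are all hypothetical stand-ins (the paper’s actual pipeline, data splits, and metrics may differ).

```python
# Toy sketch of SUDO-style pseudo-label evaluation: try each candidate
# label for the unlabeled "wild" points, train a model under that label,
# and keep the label whose model scores best on labeled held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Labeled held-out data: two Gaussian clusters (class 0 and class 1).
X_heldout = np.vstack([rng.normal(-2, 1, (100, 2)),
                       rng.normal(2, 1, (100, 2))])
y_heldout = np.array([0] * 100 + [1] * 100)

# Unlabeled "wild" points, drawn near the class-1 cluster.
X_wild = rng.normal(2, 1, (30, 2))

def sudo_score(X_wild, pseudo_label, X_lab, y_lab):
    """Train on labeled data plus wild points under a candidate
    pseudo-label, then score on the labeled held-out set.
    (The paper may use a dedicated evaluation split instead.)"""
    X_train = np.vstack([X_lab, X_wild])
    y_train = np.concatenate([y_lab, np.full(len(X_wild), pseudo_label)])
    clf = LogisticRegression().fit(X_train, y_train)
    return clf.score(X_lab, y_lab)

# The candidate label whose model performs best is the most likely label.
scores = {label: sudo_score(X_wild, label, X_heldout, y_heldout)
          for label in (0, 1)}
inferred = max(scores, key=scores.get)
```

Intuition: assigning the wrong pseudo-label (here, class 0) to wild points that actually resemble class 1 contaminates the training set and drags down held-out performance, so the correct label wins the comparison.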

Keywords

* Artificial intelligence