Summary of Evaluating Reliability in Medical DNNs: A Critical Analysis of Feature and Confidence-Based OOD Detection, by Harry Anthony et al.
Evaluating Reliability in Medical DNNs: A Critical Analysis of Feature and Confidence-Based OOD Detection
by Harry Anthony, Konstantinos Kamnitsas
First submitted to arXiv on: 30 Aug 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes a novel approach to out-of-distribution (OOD) detection in deep neural networks (DNNs) for medical image analysis. The authors create two new OOD benchmarks by dividing dermatology and ultrasound datasets into subsets with and without artefacts, training models on artefact-free images, and assessing the impact of artefacts on predictions. They show that OOD artefacts can boost a model’s softmax confidence in its predictions, contradicting the assumption that OOD artefacts should lead to more uncertain outputs. The authors argue that a combination of feature-based and confidence-based methods should be used within DNN pipelines to mitigate their respective weaknesses (a minimal sketch of these two families of OOD scores follows the table). |
Low | GrooveSquid.com (original content) | The paper talks about how doctors use special computer programs to look at pictures of skin or the inside of people’s bodies. These programs can make mistakes if the pictures are strange or different from what they were trained on. The authors want to help these programs make fewer mistakes by figuring out when a picture is unusual and needs extra checking. They made some new tests for this problem and found that sometimes the program thinks it knows what’s in a picture even when it’s wrong, just because the picture looks a little different. This is important because doctors need to know when the program’s answer can be trusted. |
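To illustrate the distinction the medium summary draws between confidence-based and feature-based OOD detection, below is a minimal NumPy sketch of one standard score from each family: maximum softmax probability (confidence-based) and Mahalanobis distance in feature space (feature-based). The function names and synthetic data are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np

def msp_score(logits):
    """Confidence-based OOD score: maximum softmax probability (MSP).
    A lower maximum probability suggests the input may be OOD."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)                               # high = in-distribution

def mahalanobis_score(features, train_features):
    """Feature-based OOD score: Mahalanobis distance of a test feature vector
    to the training feature distribution. A larger distance suggests OOD."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    cov_inv = np.linalg.pinv(cov)                          # pseudo-inverse for stability
    diff = features - mu
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared distances

# Hypothetical usage: in practice, logits and features would come from a trained DNN.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(500, 64))
test_features = rng.normal(size=(10, 64))
test_logits = rng.normal(size=(10, 5))
print(msp_score(test_logits))                              # low values flag possible OOD inputs
print(mahalanobis_score(test_features, train_features))    # high values flag possible OOD inputs
```

The paper’s finding that artefacts can inflate softmax confidence is exactly why a confidence-based score like MSP can fail on its own; combining it with a feature-based score is the kind of pairing the authors advocate.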
Keywords
* Artificial intelligence
* Softmax