Summary of Weakly-supervised Medical Image Segmentation with Gaze Annotations, by Yuan Zhong et al.


Weakly-supervised Medical Image Segmentation with Gaze Annotations

by Yuan Zhong, Chenhui Tang, Yumeng Yang, Ruoxi Qi, Kang Zhou, Yuqi Gong, Pheng Ann Heng, Janet H. Hsiao, Qi Dou

First submitted to arXiv on: 10 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel approach to medical image segmentation that leverages eye gaze as an efficient form of annotation. The authors develop a multi-level framework that trains multiple networks on discriminative human attention, simulated by pseudo-masks derived from gaze heatmaps. To mitigate gaze noise, the framework enforces cross-level consistency, which regularizes the networks against overfitting to the noisy labels. The method is validated on two public medical datasets for polyp and prostate segmentation. The authors also contribute a new high-quality gaze dataset, GazeMedSeg, and show that gaze annotation outperforms previous label-efficient schemes in both segmentation performance and annotation time. This work has implications for reducing the cost and time required to annotate medical images.
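
To make the method concrete, the sketch below illustrates its two core ideas: multi-level pseudo-masks thresholded from a gaze heatmap, and a cross-level consistency term that ties the networks together. This is a minimal illustration under assumptions, not the authors' implementation; the threshold values, the binary cross-entropy supervision, the MSE consistency term, and the weight lam are all stand-in choices.

# Minimal sketch (not the paper's code): multi-level pseudo-masks from a
# gaze heatmap plus a cross-level consistency loss. Thresholds, losses,
# and weights are illustrative assumptions, not the paper's settings.
import torch
import torch.nn.functional as F

def pseudo_masks_from_gaze(heatmap, thresholds=(0.3, 0.6)):
    # Binarize a normalized gaze heatmap (B, 1, H, W) in [0, 1] at several
    # confidence levels: a low threshold gives a permissive mask, a high
    # threshold a strict one.
    return [(heatmap >= t).float() for t in thresholds]

def cross_level_consistency(logits_a, logits_b):
    # Encourage networks trained at different levels to agree, which
    # regularizes against overfitting to noise in the gaze labels.
    return F.mse_loss(torch.sigmoid(logits_a), torch.sigmoid(logits_b))

def training_step(nets, image, gaze_heatmap, lam=0.1):
    # One step with len(nets) == number of thresholds: each network is
    # supervised by its own pseudo-mask, and every pair of networks is
    # tied together by the consistency term.
    masks = pseudo_masks_from_gaze(gaze_heatmap)
    logits = [net(image) for net in nets]
    sup = sum(F.binary_cross_entropy_with_logits(l, m)
              for l, m in zip(logits, masks))
    cons = sum(cross_level_consistency(a, b)
               for i, a in enumerate(logits) for b in logits[i + 1:])
    return sup + lam * cons

In practice, training_step would be called once per batch with two or more segmentation networks; lam, which balances supervision against consistency, is likewise a hypothetical choice.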
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper uses eye movements to help computers better understand medical images. Usually, people have to spend a lot of time marking up these images so that computers can learn from them. The authors propose a new way to do this using eye movements instead. They train multiple computer models to work together, using rough labels created from eye movement data. This approach is tested on two different medical image segmentation tasks and shows better results than previous methods. The paper also provides a new dataset of eye movement data that other researchers can use.

Keywords

» Artificial intelligence  » Attention  » Image segmentation  » Overfitting