
Summary of Deep-Motion-Net: GNN-based Volumetric Organ Shape Reconstruction from Single-View 2D Projections, by Isuru Wijesinghe et al.


Deep-Motion-Net: GNN-based volumetric organ shape reconstruction from single-view 2D projections

by Isuru Wijesinghe, Michael Nix, Arezoo Zakeri, Alireza Hokmabadi, Bashar Al-Qaisieh, Ali Gooya, Zeike A. Taylor

First submitted to arXiv on: 9 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Deep-Motion-Net is an end-to-end graph neural network that reconstructs 3D organ shape from a single in-treatment kV planar X-ray image acquired at any arbitrary projection angle. The model learns to regress mesh motion from a patient-specific template mesh together with deep features extracted from kV images at arbitrary projection angles. It combines a convolutional neural network for image feature extraction, a feature-pooling network that attaches those features to the mesh vertices, and graph attention networks that deform the feature-encoded mesh. The model is trained on synthetically generated organ motion instances and corresponding kV images, and was evaluated on synthetic respiratory motion scenarios and on in-treatment images acquired over full scan series for liver cancer patients. A minimal illustrative sketch of this kind of pipeline follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
Deep-Motion-Net is a new way to use X-ray images to figure out what organs are doing during radiation treatment. Right now, doctors have to rely on limited imaging methods that can be tricky to use. This model uses artificial intelligence to take X-ray images and create 3D pictures of organs in motion. It’s like taking a selfie from different angles and using AI to create a 3D version of your face! The goal is to make radiation treatment more accurate and effective.

Keywords

» Artificial intelligence  » Attention  » Graph neural network  » Regression