


Memory-Scalable and Simplified Functional Map Learning

by Robin Magnet, Maks Ovsjanikov

First submitted to arXiv on: 30 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)

This paper proposes a novel, memory-scalable, and efficient functional map learning pipeline for non-rigid shape matching. By promoting consistency between functional and pointwise maps, the authors demonstrate significant accuracy improvements over earlier methods. The approach exploits the specific structure of functional maps to avoid storing large dense matrices, making it more efficient and scalable. In addition, the authors introduce a differentiable map refinement layer that can be used at train time to enforce consistency between the refined and initial versions of the map. The result is a simpler, more efficient, and numerically stable approach that achieves close to state-of-the-art results in challenging scenarios.

Low Difficulty Summary (GrooveSquid.com, original content)

This paper helps us match shapes better! It creates a new way to learn about shapes that is faster and uses less memory. Before, people used big matrices to do this, but now we can do it without them. This makes the method more efficient and lets it handle bigger problems. The authors also improve the refinement step so that it is easier to use. The new approach works well and comes close to the best results we have seen.
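To make the "avoid storing large dense matrices" idea concrete, here is a minimal NumPy/SciPy sketch of the standard spectral conversion from a functional map to a pointwise map. This is not the paper's learned, differentiable pipeline; it only illustrates the underlying trick the summary refers to: a k×k functional map plus per-shape spectral embeddings recover a point-to-point correspondence via nearest neighbors, without ever materializing an n2×n1 dense map matrix. The function name and variable names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def pointwise_from_functional(C, evecs1, evecs2):
    """Convert a functional map to a point-to-point map.

    C      : (k, k) functional map in the spectral (Laplace-Beltrami) basis
    evecs1 : (n1, k) eigenfunctions of shape 1, sampled at its vertices
    evecs2 : (n2, k) eigenfunctions of shape 2, sampled at its vertices

    Returns an (n2,) array p2p such that vertex i of shape 2 is matched
    to vertex p2p[i] of shape 1. Memory cost is O(n*k), not O(n1*n2),
    because the dense pointwise map matrix is never formed.
    """
    # Align shape 2's spectral embedding with shape 1's via C, then
    # match each embedded vertex to its nearest neighbor on shape 1.
    tree = cKDTree(evecs1)
    _, p2p = tree.query(evecs2 @ C)
    return p2p
```

For small k (typically 30 to 100 eigenfunctions), the k-d tree query dominates the cost, which is why this conversion scales to meshes where an n2×n1 matrix would not fit in memory.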

Keywords

» Artificial intelligence