
On Optimal Sampling for Learning SDF Using MLPs Equipped with Positional Encoding

by Guying Lin, Lei Yang, Yuan Liu, Congyi Zhang, Junhui Hou, Xiaogang Jin, Taku Komura, John Keyser, Wenping Wang

First submitted to arxiv on: 2 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Graphics (cs.GR); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates neural implicit fields, specifically the neural signed distance field (SDF) of a shape, which have numerous applications in areas such as 3D shape encoding and collision detection. While Multi-layer Perceptrons (MLPs) with positional encoding (PE) can effectively capture high-frequency geometric details, they often produce noisy artifacts in the learned implicit fields. The authors explain this issue through Fourier analysis and propose a method to estimate the intrinsic frequency of the network from its randomly initialized weights. By sampling against this intrinsic frequency, they determine a recommended training sampling rate that yields accurate fitting results. The proposed sampling strategy outperforms existing methods.
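The pipeline described above (measure the frequency content of a randomly initialized PE-MLP, then pick a training sampling rate against it) can be sketched in NumPy. This is an illustrative sketch, not the paper's exact procedure: the tiny network, the 99% spectral-energy cutoff, and the Nyquist-style doubling rule are all assumptions made here for demonstration.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style PE: map 1D coords to sin/cos features at frequencies 2^k."""
    feats = [np.sin((2.0 ** k) * np.pi * x) for k in range(num_freqs)]
    feats += [np.cos((2.0 ** k) * np.pi * x) for k in range(num_freqs)]
    return np.stack(feats, axis=-1)  # shape (N, 2*num_freqs)

def random_mlp_output(x, num_freqs=6, hidden=64, seed=0):
    """Evaluate a small randomly initialized ReLU MLP on PE features
    (stand-in for the SDF network before training)."""
    rng = np.random.default_rng(seed)
    pe = positional_encoding(x, num_freqs)
    w1 = rng.normal(0.0, 1.0 / np.sqrt(pe.shape[1]), (pe.shape[1], hidden))
    w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, 1))
    h = np.maximum(pe @ w1, 0.0)  # ReLU hidden layer
    return (h @ w2).ravel()

def estimate_intrinsic_frequency(num_freqs=6, n_samples=4096, energy=0.99):
    """Estimate a dominant frequency (cycles per unit interval) of the
    random network's output via its power spectrum: the lowest frequency
    below which `energy` of the spectral power is concentrated."""
    x = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    y = random_mlp_output(x, num_freqs=num_freqs)
    power = np.abs(np.fft.rfft(y - y.mean())) ** 2   # drop the DC term
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / n_samples)
    cum = np.cumsum(power) / power.sum()
    return freqs[np.searchsorted(cum, energy)]

if __name__ == "__main__":
    f_intrinsic = estimate_intrinsic_frequency()
    # Nyquist-style rule of thumb: sample at least twice the intrinsic frequency.
    recommended_rate = 2.0 * f_intrinsic
    print(f"intrinsic frequency ~ {f_intrinsic:.1f} cycles/unit, "
          f"recommended sampling rate ~ {recommended_rate:.1f} samples/unit")
```

The idea mirrors classical signal processing: if the training samples are sparser than twice the network's dominant frequency, the high-frequency components of the PE-MLP are undersampled and show up as the noisy artifacts the paper analyzes.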
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps us understand how neural networks can learn shapes in 3D space. Right now, these networks are great at capturing small details about shapes, but they sometimes make mistakes and show weird artifacts. The researchers wanted to figure out why this happens and found that it’s because the networks are trying to capture too many tiny details. They came up with a new way to teach the networks that lets them learn accurate shape representations without these mistakes. This is an important step forward in using neural networks for tasks like computer vision and robotics.

Keywords

* Artificial intelligence
* Positional encoding