Summary of BlockFusion: Expandable 3D Scene Generation Using Latent Tri-plane Extrapolation, by Zhennan Wu et al.


BlockFusion: Expandable 3D Scene Generation using Latent Tri-plane Extrapolation

by Zhennan Wu, Yang Li, Han Yan, Taizhang Shang, Weixuan Sun, Senbo Wang, Ruikai Cui, Weizhe Liu, Hiroyuki Sato, Hongdong Li, Pan Ji

First submitted to arxiv on: 30 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Graphics (cs.GR)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper’s original abstract, written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content written by GrooveSquid.com)
BlockFusion is a diffusion-based model that generates 3D scenes as unit blocks, allowing a scene to be seamlessly extended. The model is trained on datasets of 3D blocks cropped from complete meshes; each block is converted into a hybrid neural field: a tri-plane of geometry features paired with signed distance values. A variational auto-encoder compresses these tri-planes into a latent space, where denoising diffusion generates high-quality, diverse blocks. To expand a scene, empty blocks are appended so that they overlap the existing scene, and the existing latent tri-planes are extrapolated to populate them, yielding semantically meaningful transitions. A 2D layout conditioning mechanism controls the placement and arrangement of scene elements. Experimental results demonstrate that BlockFusion generates large, geometrically consistent, high-quality 3D scenes in both indoor and outdoor scenarios. A minimal code sketch of this pipeline appears after these summaries.

Low Difficulty Summary (original content written by GrooveSquid.com)
BlockFusion is a new way to make 3D scenes from blocks that can be added together. The model takes in lots of 3D block data and uses it to build hybrid neural fields that describe the shape of each block with distance values. This information is then used to generate new, high-quality 3D scenes that are diverse and geometrically correct. To make a scene bigger, you simply add more blocks and extrapolate the existing information. The model also lets you control where objects in the scene are placed. Overall, BlockFusion can create large, realistic, and detailed 3D scenes.

Keywords

» Artificial intelligence  » Diffusion  » Encoder  » Latent space