View synthesis is a computer vision (CV) technique that uses observed images to recover a 3D scene representation capable of rendering the scene from novel, unobserved viewpoints. Recently, it has seen significant progress driven by neural volumetric representations.

Neural Radiance Fields (NeRF) can render photorealistic novel views with fine geometric detail and realistic view-dependent appearance. NeRF represents a scene as a continuous volumetric function, parameterized by a multilayer perceptron (MLP) that maps a continuous 3D position (together with a viewing direction) to the volume density and view-dependent emitted radiance at that location.
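
To make this concrete, below is a minimal sketch of a NeRF-style MLP in PyTorch. The layer sizes, positional-encoding frequencies, and the split into density and color heads are illustrative assumptions, not the exact architecture from the paper.

```python
# Illustrative NeRF-style MLP: maps a 3D position and viewing direction
# to volume density and view-dependent color. Sizes are assumptions.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs):
    # Map each coordinate to [x, sin(2^k x), cos(2^k x)] so the MLP can
    # represent high-frequency detail.
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)


class TinyNeRF(nn.Module):
    def __init__(self, pos_freqs=6, dir_freqs=4, hidden=128):
        super().__init__()
        pos_dim = 3 * (1 + 2 * pos_freqs)
        dir_dim = 3 * (1 + 2 * dir_freqs)
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(positional_encoding(xyz, self.pos_freqs))
        # Density depends only on position; color also depends on view direction.
        sigma = torch.nn.functional.softplus(self.density_head(h))
        d = positional_encoding(view_dir, self.dir_freqs)
        rgb = torch.sigmoid(self.color_head(torch.cat([h, d], dim=-1)))
        return sigma, rgb
```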

However, rendering a NeRF is slow and computationally expensive, because producing each pixel requires querying the MLP at many sample points along the corresponding camera ray. This limits its use for interactive view synthesis and makes it impossible to display a recovered 3D model in a standard web browser.
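
To see where that cost comes from, the sketch below shows the standard volume-rendering quadrature for a single ray: the pixel color is an alpha-composite of the MLP outputs at many samples along the ray, so a full frame takes millions of network evaluations. The function name and sample count here are illustrative assumptions.

```python
# Illustrative volume rendering for one ray: one MLP query per sample point.
import torch


def render_ray(model, origin, direction, near=2.0, far=6.0, num_samples=128):
    t = torch.linspace(near, far, num_samples)            # depths along the ray
    points = origin + t[:, None] * direction              # (num_samples, 3) positions
    dirs = direction.expand(num_samples, 3)
    sigma, rgb = model(points, dirs)                       # one MLP query per sample
    delta = t[1] - t[0]                                    # spacing between samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)    # per-segment opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)      # transmittance along the ray
    trans = torch.roll(trans, 1, dims=0)
    trans[0] = 1.0
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)             # composited pixel color
```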

Summary: https://www.marktechpost.com/2021/04/03/researchers-at-google-introduces-an-efficient-neural-volumetric-representation-that-enables-real-time-view-synthesis/

Paper: https://arxiv.org/pdf/2103.14645.pdf


