Article Based on NeRF Research
Introduction
Neural Radiance Fields (NeRF) is a state-of-the-art method that reconstructs a 3D scene representation from a set of 2D images and their corresponding camera poses. Once the representation has been optimised from these initially estimated camera poses, it can render novel views of the scene from arbitrary camera positions.
How Does NeRF Work?
The NeRF algorithm represents a scene using a fully-connected (non-convolutional) deep network whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. It synthesises views by querying 5D coordinates along camera rays and uses classic volume rendering techniques to project the output colours and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimise the representation is a set of images with known camera poses.
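To make this concrete, here is a minimal, illustrative sketch (not the authors' implementation) of the two pieces described above: an MLP that maps a simplified 5D coordinate to colour and density, and the classic volume-rendering quadrature that composites samples along a camera ray into a pixel colour. The names TinyNeRF and render_ray are made up for this example, and the paper's positional encoding, hierarchical sampling, and exact architecture are omitted.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy MLP: (x, y, z, theta, phi) -> (RGB colour, volume density)."""
    def __init__(self, hidden=256):
        super().__init__()
        # Simplified 5D input; the paper additionally applies positional encoding.
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # outputs: RGB (3) + density (1)
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # view-dependent emitted colour
        sigma = torch.relu(out[..., 3])     # non-negative volume density
        return rgb, sigma

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Composite colours/densities sampled along one camera ray into a pixel."""
    t = torch.linspace(near, far, n_samples)          # sample depths along the ray
    pts = origin + t[:, None] * direction             # (N, 3) query points
    view = direction.expand(n_samples, 3)[..., :2]    # crude 2D stand-in for (theta, phi)
    rgb, sigma = model(pts, view)

    delta = torch.cat([t[1:] - t[:-1], torch.tensor([1e10])])  # segment lengths
    alpha = 1.0 - torch.exp(-sigma * delta)                    # opacity of each segment
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), 0)[:-1]
    weights = alpha * trans                                    # alpha-compositing weights
    return (weights[:, None] * rgb).sum(dim=0)                 # final pixel colour

# Usage sketch: render one pixel for a ray leaving the origin along +z.
model = TinyNeRF()
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```

Because every operation above is differentiable, the rendered pixel can be compared against the ground-truth pixel with a simple squared-error loss and gradients flow back into the MLP weights, which is why a set of posed images is sufficient supervision.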
Synthetic Results
For a visual representation, see the project page: https://www.matthewtancik.com/nerf
Research Papers Based on NeRF
- Instant NeRF by Nvidia
- Block-NeRF by Waymo
- Neural 3D Reconstruction in the Wild
- NSVF
- Plenoxels
- Instant NGP
- NSFF (3D Video Stabilization)
- D2 NeRF (Object Removal)
- Stylized NeRF
- Artistic Radiance Fields
- HumanNeRF
- BANMo