
View Synthesis and NeRF starter

Jehill Parikh
3 min read · Jan 25, 2023


Neural Radiance Fields (NeRF) are the state-of-the-art approach for synthesising novel views of a scene. The input is five-dimensional: a 3D position (x, y, z) together with a viewing direction given by two angles (θ, Φ). This 5D input is fed to a neural network F, which predicts the colour and volume density of the 3D representation at that point. Applications span a wide range of fields, such as AR/VR and self-driving data.
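As a rough sketch of this idea (not the paper's exact architecture, which handles density and view-dependent colour in separate branches), a small PyTorch MLP mapping the 5D input to (R, G, B, σ) might look like this:

```python
# A minimal sketch: an MLP that maps a 5D input (x, y, z, theta, phi)
# to colour and density (R, G, B, sigma). Hidden width is an assumption.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # outputs (R, G, B, sigma)
        )

    def forward(self, xyz: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) 3D positions; angles: (N, 2) viewing angles (theta, phi)
        x = torch.cat([xyz, angles], dim=-1)        # (N, 5)
        out = self.mlp(x)
        rgb = torch.sigmoid(out[..., :3])           # colour in [0, 1]
        sigma = torch.relu(out[..., 3:])            # non-negative density
        return torch.cat([rgb, sigma], dim=-1)      # (N, 4)

# usage
model = TinyNeRF()
pts = torch.rand(1024, 3)      # random 3D points
dirs = torch.rand(1024, 2)     # random (theta, phi) viewing angles
rgbsigma = model(pts, dirs)    # (1024, 4)
```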

To render a pixel, a "ray" is cast through the scene and a continuous colour-and-density function (RGBσ) is predicted along it, as shown in figures a-c below. A ray is just a line through the 3D scene; at the start of training, the "black" dots represent random values at sample points along a particular ray. After training (figures b and c), each ray has learned the colour and density along its length; in this example the samples are yellow, but the palette can be far more complex. For this purpose the inputs are converted into rays.
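To make the ray idea concrete, here is a minimal sketch of sampling points along a single ray, querying the network for (RGB, σ) at each sample, and compositing the results into one pixel colour with standard volume-rendering weights. It reuses the TinyNeRF sketch above; the near/far bounds and sample count are illustrative assumptions.

```python
# Sample a ray r(t) = o + t * d, query (RGB, sigma) at each sample, and
# alpha-composite the samples into a single pixel colour.
import torch

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    # origin, direction: (3,) tensors defining the ray
    t = torch.linspace(near, far, n_samples)                  # sample depths
    pts = origin + t[:, None] * direction                     # (n_samples, 3)
    # express the ray direction as (theta, phi) for the 5D network input
    theta = torch.atan2(direction[1], direction[0])
    phi = torch.acos(direction[2] / direction.norm())
    angles = torch.stack([theta, phi]).expand(n_samples, 2)

    out = model(pts, angles)                                  # (n_samples, 4)
    rgb, sigma = out[..., :3], out[..., 3]

    # volume-rendering weights: alpha compositing along the ray
    delta = t[1:] - t[:-1]
    delta = torch.cat([delta, torch.tensor([1e10])])          # last interval
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)
    trans = torch.cat([torch.ones(1), trans[:-1]])            # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)                # final pixel RGB

# usage: one ray cast from the origin along +z
pixel = render_ray(model, torch.tensor([0.0, 0.0, 0.0]),
                   torch.tensor([0.0, 0.0, 1.0]))
```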

A simple explanation is that the neural network here is a "mapping": a continuous function defined over all possible rays in the scene. At prediction time it can therefore synthesise multiple views of the scene.
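As an illustration, the sketch below casts one ray per pixel from a new camera pose and renders each ray with the render_ray function above. The pinhole-camera parameters (image size, focal length, camera at the origin looking down +z) are assumptions for illustration, not values from the paper.

```python
# Synthesise a novel view: one ray per pixel from a chosen camera pose,
# each composited into a colour by render_ray (defined above).
import torch

def render_view(model, cam_origin, height=32, width=32, focal=30.0):
    image = torch.zeros(height, width, 3)
    for i in range(height):
        for j in range(width):
            # pixel -> ray direction in camera coordinates (looking down +z)
            d = torch.tensor([(j - width / 2) / focal,
                              (i - height / 2) / focal,
                              1.0])
            d = d / d.norm()
            image[i, j] = render_ray(model, cam_origin, d)
    return image

# usage: render the scene from a camera placed at the origin
novel_view = render_view(model, torch.tensor([0.0, 0.0, 0.0]))
```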

In the original paper, the authors employed the following to obtain good performance:

  1. Positional embeddings to ensure the neural network does not…
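The article is truncated here, but as a hedged sketch of the positional encoding named in item 1: each input coordinate is mapped to sines and cosines at increasing frequencies, which helps the MLP represent high-frequency detail. The number of frequencies (n_freqs) and the exact interleaving below are assumptions.

```python
# Positional encoding sketch: map each raw coordinate to sin/cos features
# at exponentially increasing frequencies before feeding it to the MLP.
import torch

def positional_encoding(x: torch.Tensor, n_freqs: int = 10) -> torch.Tensor:
    # x: (..., D) raw coordinates; returns (..., D * 2 * n_freqs)
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi    # 2^k * pi
    scaled = x[..., None] * freqs                      # (..., D, n_freqs)
    enc = torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., D * 2 * n_freqs)

# usage: encode a batch of 3D points
pts = torch.rand(1024, 3)
encoded = positional_encoding(pts)   # shape (1024, 60)
```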

Written by Jehill Parikh

Neuroscientist | ML Practitioner | Physicist
