NeurMips: Neural Mixture of Planar Experts
for View Synthesis

  • ¹ University of Illinois at Urbana-Champaign
  • ² National Taiwan University
  • ³ Massachusetts Institute of Technology
* Equal Contribution

CVPR 2022



We present Neural Mixtures of Planar Experts (NeurMiPs), a novel planar-based scene representation for modeling geometry and appearance. NeurMiPs leverages a collection of local planar experts in 3D space as the scene representation. Each planar expert consists of the parameters of a local rectangular shape representing geometry and a neural radiance field modeling color and opacity. We render novel views by calculating ray-plane intersections and compositing the output colors and densities at the intersection points into the image. NeurMiPs blends the efficiency of explicit mesh rendering with the flexibility of the neural radiance field. Experiments demonstrate the superior performance and speed of our proposed method compared to other 3D representations for novel view synthesis.
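The rendering step described above can be sketched in a few lines: find where each ray meets a plane, then composite the per-plane colors and opacities front to back. The NumPy-based functions below are an illustrative sketch of this general procedure, not the paper's implementation; all names and the simplified single-ray setup are assumptions.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_center, plane_normal):
    """Return the ray parameter t where the ray hits the plane, or None.

    The ray is origin + t * direction; a hit requires t > 0 (in front of
    the camera) and a non-parallel direction.
    """
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-8:
        return None  # ray is (nearly) parallel to the plane
    t = np.dot(np.asarray(plane_center) - np.asarray(origin), plane_normal) / denom
    return t if t > 0 else None

def composite(colors, alphas):
    """Front-to-back alpha compositing of per-plane samples sorted by depth."""
    out = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for color, alpha in zip(colors, alphas):
        out += transmittance * alpha * np.asarray(color, dtype=float)
        transmittance *= 1.0 - alpha
    return out
```

For example, a ray from the origin along +z hits a plane at z = 2 (normal +z) with t = 2; compositing a half-transparent red plane in front of an opaque green one yields an even red-green mix. In the full method, the color and opacity at each intersection point would come from that plane's neural radiance field rather than being fixed constants.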


Results: Tanks and Temples

The videos qualitatively compare NeurMiPs with KiloNeRF.

Results: Replica

The videos qualitatively compare NeurMiPs with NeRF (PyTorch3D implementation) and NeX, which are a volume rendering method and a multi-plane image (MPI) method, respectively.

Related links

NeurMiPs is implemented on top of the codebase of NeRF (PyTorch3D implementation).



We thank National Center for High-performance Computing (NCHC) for providing computational and storage resources. We also thank NVIDIA for hardware donation.
The website template was borrowed from Michaël Gharbi.
WCM is partially funded by the MIT-IBM Watson AI Lab.