Research Article
Open access
Published on 15 March 2024
Liu, X. (2024). An overview of Neural Radiance Fields. Applied and Computational Engineering, 45, 1-6.

An overview of Neural Radiance Fields

Xiaoju Liu 1,*
  • 1 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611730, China

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/45/20241016

Abstract

Synthesizing controllable, photo-realistic images and videos is one of the fundamental goals of computer graphics. Neural rendering is a rapidly emerging approach to image synthesis that represents scenes compactly and, by using neural networks, learns to render from existing observations. Neural Radiance Fields (NeRF) combine neural fields with the classical graphics technique of volume rendering, achieving the first photo-realistic view synthesis results from an implicit representation. Unlike previous approaches, NeRF uses a volume as its intermediate representation, reconstructing an implicit volume from posed input images. Although its advantages are apparent, the original NeRF has notable drawbacks: it is slow to train and render, requires many input views, can only represent static scenes, and a trained NeRF representation does not generalize to other scenes. This report surveys how researchers have addressed these shortcomings over the last three years and analyzes the proposed solutions from several perspectives.
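The volume-rendering step mentioned above can be made concrete with a minimal sketch: along a camera ray, NeRF queries a network for a density and a color at each sample point, then alpha-composites the samples into a single pixel color. The sketch below (function and variable names are illustrative, not from the paper) implements only the compositing quadrature, taking precomputed densities and colors as input.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, as in NeRF's volume rendering.

    densities: (N,) non-negative volume density sigma at each sample
    colors:    (N, 3) RGB color predicted at each sample
    deltas:    (N,) distance between adjacent samples along the ray
    Returns the expected color C(r) of the ray as a (3,) array.
    """
    # Opacity contributed by each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alpha = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    # Per-sample compositing weights sum the contributions front to back
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)
```

An opaque first sample (very large density) returns that sample's color, while zero density everywhere returns black, matching the intuition that the weights form a front-to-back occlusion model.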

Keywords

Neural Rendering, Neural Radiance Fields, Deep Learning, 3D Reconstruction



Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 4th International Conference on Signal Processing and Machine Learning

Conference website: https://www.confspml.org/
ISBN: 978-1-83558-331-9 (Print) / 978-1-83558-332-6 (Online)
Conference date: 15 January 2024
Editor: Marwan Omar
Series: Applied and Computational Engineering
Volume number: Vol. 45
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish with this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).