Research Article
Open access
Published on 23 October 2023
Wei, Y. (2023). Point cloud densification via symmetry. Applied and Computational Engineering, 13, 13-20.

Point cloud densification via symmetry

Yu Wei 1,*
  • 1 Harbin Institute of Technology

* Author to whom correspondence should be addressed.

https://doi.org/10.54254/2755-2721/13/20230703

Abstract

Three-dimensional objects are usually represented by point clouds produced from the lidar reflections received by a vehicle's sensors. However, these point clouds are commonly sparse, since lidar returns are restricted by the position from which the sensor scans the objects around it. In autonomous driving, the point cloud of an object is typically concentrated on one or two of its sides and seldom depicts the object as a whole. In this paper, we propose symmetry-based methods to mitigate this intrinsic problem. Our work uses CenterPoint as the backbone and fine-tunes it so that the data are augmented. We further use FutureDet as the detector and predictor to check whether the results of our methods match the design. We obtain object information from the CenterPoint detector and use the position of each object's center point to mirror the observed points, so that points also exist on the occluded side. We evaluate the approach by comparing the metrics of FutureDet and of the fine-tuned model on the nuScenes dataset.

Keywords

computer vision, 3D detection, point cloud, data augmentation
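The core symmetry operation described in the abstract, reflecting the observed lidar points of a detected object across a vertical plane through the object's center so that points also appear on the occluded side, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the choice of orienting the plane by the detected box heading (yaw), and the NumPy array layout are all assumptions.

```python
import numpy as np

def mirror_points(points, center, yaw):
    """Reflect points across the vertical symmetry plane of a detected
    object: the plane passes through `center` and is spanned by the
    heading direction given by `yaw` and the z-axis.

    points : (N, 3+) array of lidar points (extra columns, e.g.
             intensity, are copied through unchanged)
    center : (3,) object center from the detector
    yaw    : heading angle of the detected box, in radians
    """
    # Unit normal of the symmetry plane (perpendicular to the heading,
    # lying in the xy-plane).
    normal = np.array([-np.sin(yaw), np.cos(yaw), 0.0])
    # Signed distance of each point from the plane.
    d = (points[:, :3] - center) @ normal
    # Standard point reflection: p' = p - 2 (d . n) n.
    mirrored = points.copy()
    mirrored[:, :3] = points[:, :3] - 2.0 * d[:, None] * normal
    # Densified cloud: original points plus their mirror images.
    return np.concatenate([points, mirrored], axis=0)
```

For example, with an object centered at the origin and heading along the x-axis (yaw = 0), a point at (1, 2, 3) gains a mirror image at (1, -2, 3), doubling the coverage of the object's far side.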


Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 5th International Conference on Computing and Data Science

Conference website: https://2023.confcds.org/
ISBN: 978-1-83558-017-2 (Print) / 978-1-83558-018-9 (Online)
Conference date: 14 July 2023
Editors: Roman Bauer, Marwan Omar, Alan Wang
Series: Applied and Computational Engineering
Volume number: Vol.13
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).