3D point cloud; Semantic information; Feature extraction; Point cloud representation; Deep learning; Image recognition
Abstract:
The raw nature of point clouds is an important challenge for their direct exploitation in architecture, engineering and construction applications. In particular, their lack of semantics hinders their use in automatic workflows (Poux, 2019). Moreover, the volume and irregular structure of point clouds make it difficult to classify datasets directly, automatically and efficiently, especially when compared with state-of-the-art 2D raster classification. Recently, advances in deep learning models such as convolutional neural networks (CNNs) have considerably improved the performance of image-based classification of remote sensing scenes (Chen et al., 2018; Cheng et al., 2017). In this research, we examine a simple and innovative approach that represents large 3D point clouds through multiple 2D projections in order to leverage learning approaches based on 2D images. In other words, we propose an automatic process for extracting 360° panoramas and enhancing them so that raster-based methods can provide domain-based semantic enrichment. A rigorous characterization is essential for point cloud classification, especially given the very wide variety of 3D point cloud application domains. To test the adequacy of the method and its potential for generalization, several tests were performed on different datasets. The developed semantic augmentation algorithm uses only the X, Y, Z attributes and the camera positions as inputs.
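The core of the described projection step can be sketched as an equirectangular (spherical) mapping of each 3D point, expressed relative to a given camera position, onto a 2D panorama grid. The function below is a minimal illustration of that idea, not the authors' implementation; the function name, image resolution and axis conventions are assumptions.

```python
import numpy as np

def project_to_panorama(points, camera_position, width=2048, height=1024):
    """Project 3D points onto an equirectangular 360-degree panorama.

    points: (N, 3) array of X, Y, Z coordinates.
    camera_position: (3,) array, the panorama viewpoint.
    Returns integer (row, col) pixel coordinates for each point.
    """
    # Express points relative to the camera (viewpoint at the origin).
    p = np.asarray(points, dtype=float) - np.asarray(camera_position, dtype=float)

    # Spherical angles: azimuth in [-pi, pi], elevation in [-pi/2, pi/2].
    # Points coinciding with the camera position (zero norm) are undefined
    # and should be filtered out beforehand in a real pipeline.
    r = np.linalg.norm(p, axis=1)
    azimuth = np.arctan2(p[:, 1], p[:, 0])
    elevation = np.arcsin(p[:, 2] / r)

    # Map angles linearly to pixel coordinates of the panorama raster.
    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return row, col
```

Once every point has a pixel coordinate, per-pixel attributes (depth, color, intensity) can be rasterized into panorama channels, and 2D predictions can be mapped back to the contributing 3D points to enrich the cloud semantically.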
Chen, Z., Wang, S., Hou, X., Shao, L., 2018. Recurrent Transformer Networks for Remote Sensing Scene Categorisation.
Cheng, G., Han, J., Lu, X., 2017. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 105, 1865–1883. https://doi.org/10.1109/JPROC.2017.2675998
Salomon, D., 2006. Transformations and Projections in Computer Graphics. Springer, London.
Guinard, S., Landrieu, L., Vallet, B., 2017. Pré-segmentation pour la classification faiblement supervisée de scènes urbaines à partir de nuages de points 3D LIDAR [Pre-segmentation for weakly supervised classification of urban scenes from 3D LIDAR point clouds].
Kubany, A., Ben Ishay, S., Ohayon, R.-S., Shmilovici, A., Rokach, L., Doitshman, T., 2019. Semantic Comparison of State-of-the-Art Deep Learning Methods for Image Multi-Label Classification. arXiv Prepr. arXiv 1–10.
Morton, P., Douillard, B., Underwood, J., 2011. An evaluation of dynamic object tracking with 3D LIDAR. Proc. 2011 Australas. Conf. Robot. Autom. 7–9.
Poux, F., 2019. The Smart Point Cloud: Structuring 3D intelligent point data. Liège.
Poux, F., Billen, R., 2019a. A Smart Point Cloud Infrastructure for intelligent environments, in: Lindenbergh, R., Belen, R. (Eds.), Laser Scanning: An Emerging Technology in Structural Engineering, ISPRS Book Series. Taylor & Francis Group/CRC Press, United States.
Poux, F., Billen, R., 2019b. Voxel-Based 3D Point Cloud Semantic Segmentation: Unsupervised Geometric and Relationship Featuring vs Deep Learning Methods. ISPRS Int. J. Geo-Information 8, 213. https://doi.org/10.3390/ijgi8050213
Poux, F., Hallot, P., Neuville, R., Billen, R., 2016. Smart Point Cloud: Definition and Remaining Challenges. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. IV-2/W1, 119–127. https://doi.org/10.5194/isprsannals-IV-2-W1-119-2016
Poux, F., Neuville, R., Nys, G.-A., Billen, R., 2018. 3D Point Cloud Semantic Modelling: Integrated Framework for Indoor Spaces and Furniture. Remote Sens. 10, 1412. https://doi.org/10.3390/rs10091412
Poux, F., Neuville, R., Van Wersch, L., Nys, G.-A., Billen, R., 2017. 3D Point Clouds in Archaeology: Advances in Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences 7, 96. https://doi.org/10.3390/geosciences7040096