Virtual reality; 3D point cloud; Octree data structure; Segmentation; Spatial indexing; Classification
Abstract:
With the increasing volume of 3D applications using immersive technologies such as virtual, augmented and mixed reality, better ways to integrate unstructured 3D data such as point clouds as a source of data are needed. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation, without the need to derive 3D models or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for easy replication of the findings. First, we develop a semi-automatic segmentation approach to attach semantic descriptors (mainly classes) to groups of points. We then build an octree data structure, leveraged through out-of-core algorithms, to load in real time and continuously only the points that fall within the VR user's field of view. Next, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide full semantic VR data integration, enhanced through custom shaders, for future spatio-semantic queries. We tested our approach on several datasets, including a point cloud of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualizing classified massive point clouds in virtual environments at more than 100 frames per second.
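The core rendering idea described above (an octree whose nodes are loaded only when they intersect the viewer's field of view) can be illustrated with a minimal sketch. This is not the paper's Unity/C# implementation: it is a simplified, self-contained Python illustration in which the camera frustum is approximated by an axis-aligned view volume, and all names (`OctreeNode`, `build`, `visible_points`) are hypothetical.

```python
# Hypothetical sketch of octree-based visibility selection for point clouds.
# Assumption: the view frustum is simplified to an axis-aligned box (vlo, vhi).
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class OctreeNode:
    lo: Point                                  # min corner of the node's cube
    hi: Point                                  # max corner of the node's cube
    points: List[Point] = field(default_factory=list)
    children: List["OctreeNode"] = field(default_factory=list)

def build(points, lo, hi, capacity=4, depth=0, max_depth=8):
    """Recursively split a bounding cube into 8 octants until each leaf
    holds at most `capacity` points (or `max_depth` is reached)."""
    node = OctreeNode(tuple(lo), tuple(hi), list(points))
    if len(points) <= capacity or depth >= max_depth:
        return node
    cx = [(lo[i] + hi[i]) / 2 for i in range(3)]   # cube centre
    buckets = {}
    for p in points:
        key = tuple(p[i] >= cx[i] for i in range(3))  # which octant p falls in
        buckets.setdefault(key, []).append(p)
    node.points = []                                # interior nodes hold no points
    for key, pts in buckets.items():
        clo = tuple(cx[i] if key[i] else lo[i] for i in range(3))
        chi = tuple(hi[i] if key[i] else cx[i] for i in range(3))
        node.children.append(build(pts, clo, chi, capacity, depth + 1, max_depth))
    return node

def _overlaps(node, vlo, vhi):
    # Axis-aligned box/box intersection test.
    return all(node.lo[i] <= vhi[i] and node.hi[i] >= vlo[i] for i in range(3))

def visible_points(node, vlo, vhi):
    """Collect only the points inside the view volume; an out-of-core renderer
    would stream the overlapping nodes from disk instead of keeping all in RAM."""
    if not _overlaps(node, vlo, vhi):
        return []                                   # whole subtree culled
    if not node.children:
        return [p for p in node.points
                if all(vlo[i] <= p[i] <= vhi[i] for i in range(3))]
    out = []
    for child in node.children:
        out.extend(visible_points(child, vlo, vhi))
    return out
```

The key property is that subtrees outside the view volume are rejected in one bounding-box test, which is what keeps per-frame work bounded even for billions of stored points; a production system such as Potree additionally attaches a level-of-detail budget to each octree depth.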