Adaptive Acquisition and Visualization of Point Cloud Using Airborne LIDAR and Game Engine

Chengxuan Huang, Evan Brock, Dalei Wu, Yu Liang
DOI: 10.4018/IJMDEM.332881

Abstract

The development of digital twins for smart city applications requires real-time monitoring and mapping of urban environments. This work develops a framework for real-time urban mapping using an airborne light detection and ranging (LIDAR) agent and a game engine. To improve the accuracy and efficiency of data acquisition and utilization, the framework focuses on the following aspects: (1) an optimal navigation strategy based on Deep Q-Network (DQN) reinforcement learning; (2) multi-streamed game engines that both visualize urban environment data and train the deep-learning-enabled data acquisition platform; (3) a dynamic mesh used to formulate and analyze the captured point cloud; and (4) a quantitative error analysis for points generated with our experimental aerial mapping platform, together with an accuracy analysis of post-processing. Experimental results show that the proposed DQN-enabled navigation strategy, rendering algorithm, and post-processing enable a game engine to efficiently generate a highly accurate digital twin of an urban environment.

1. Introduction

The development of a digital twin of an urban environment requires live streaming of measurement data to maintain a real-time, accurate representation of the environment. It is preferable that the measurements be collected in a continuous and automated fashion. Machine-learning-enabled measurement instruments serve urban data acquisition well because they require minimal manual labor. For example, automated airborne LIDARs are desirable for real-time urban mapping, as they provide continuous high-resolution ranging and depth information by illuminating the object/environment with laser light and measuring the reflection with a sensor. However, it is usually costly to train such systems in a real-world urban setting.
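The ranging principle described above can be sketched in a few lines: a LIDAR return is a round-trip time of flight of a laser pulse, which, together with the beam's firing direction, yields a 3D point in the sensor frame. The function below is an illustrative sketch only (the function name and angle convention are assumptions, not the paper's implementation):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_point(time_of_flight_s, azimuth_rad, elevation_rad):
    """Convert one LIDAR return into a 3D point in the sensor frame.

    Range is half the round-trip distance travelled by the pulse;
    azimuth/elevation give the beam direction when it was fired.
    """
    r = C * time_of_flight_s / 2.0
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)
```

A pulse returning after `2 * 100 / C` seconds, for instance, corresponds to an object 100 m away along the beam direction.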

The development of the digital twin also calls for real-time visualization of heterogeneous data, especially live streaming data (Saddik, 2018). Visualization plays a critical role in the development of digital twins. Game engines are often used to render large, detailed 3D environments, the same kind that geospatial experts seek to replicate (Andrews, 2020). The coordinate system within any game engine can be used to replicate the 3D localization of objects and terrain while taking advantage of the engine's optimization and portability. Both interactive and performant, game engines are thus a near-perfect candidate for visualizing and interacting with geographic and urban environments (Rusu, 2018). Industry clearly agrees on this point: for example, both Google and Mapbox have built APIs and SDKs to bring their infrastructure and frameworks into the Unity game engine (Google, 2021; Mapbox, 2021).

But game engines are simply not built to handle live streaming data from unsupported objects, nor to render dynamically changing meshes defined by such data. Most previous work with LIDARs seeks to localize an object by deducing its own location from LIDAR data (Chong, 2013a; Chong, 2013b). Other work uses a combination of telemetry sensors and LIDAR data for the same purpose (Toroslu, 2018). While these approaches work well for object detection or short-term scans, they do not support collaborative scans, in which multiple scans are stitched together automatically through the geographical significance of their vertices in a three-dimensional environment.
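Stitching scans by the geographical significance of their vertices amounts to expressing every scan in one shared world frame rather than in each sensor's local frame. A minimal sketch, assuming only the platform's position and heading (yaw) are known (a full implementation would use the complete attitude, and none of these names come from the paper):

```python
import numpy as np

def to_world_frame(points_sensor, sensor_position, yaw_rad):
    """Transform an N x 3 point cloud from the sensor frame into a
    shared world frame via a rigid rotation (about z) and translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points_sensor @ R.T + np.asarray(sensor_position)

# The same physical point, seen in two scans from different headings,
# lands at one world coordinate, so the scans merge without manual alignment:
scan_a = to_world_frame(np.array([[1.0, 0.0, 0.0]]), (10.0, 0.0, 50.0), 0.0)
scan_b = to_world_frame(np.array([[0.0, 1.0, 0.0]]), (10.0, 0.0, 50.0), -np.pi / 2)
merged = np.vstack([scan_a, scan_b])
```

Because every vertex carries a world coordinate, scans from different flights or different platforms can accumulate into one consistent cloud.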

In this paper, we describe a framework connecting IoT devices to game engines, with point-cloud pre-processing and post-processing techniques for surface reconstruction in urban mapping. The framework allows the reconstruction of an environment to be observed in real time. Unlike existing work on mapping with LIDAR data (Agrawal, 2017), our implementation of large-scale live maps inside a game engine has proven very intuitive in testing. We also propose using a game engine to generate a virtual urban environment in which an airborne LIDAR agent equipped with DQN reinforcement learning is trained. This offers a safe, efficient, and low-cost approach to training the LIDAR agent. When the trained agent is deployed in a real urban environment, the game engine also visualizes the collected LIDAR data for surface reconstruction.
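The DQN training mentioned here follows, at its core, the standard bootstrapped Q-learning target: the agent's network is regressed toward r + γ · max over a' of Q_target(s', a'). The toy sketch below uses a linear Q-function and made-up action/state dimensions purely to show the target computation; it is not the paper's architecture or reward design:

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4    # e.g. candidate navigation maneuvers (illustrative)
GAMMA = 0.95     # discount factor (assumed value)

def q_values(state, weights):
    """Toy linear Q-network: one Q-value per candidate action."""
    return state @ weights

def dqn_target(reward, next_state, target_weights, done):
    """Standard DQN bootstrap target: r + gamma * max_a' Q_target(s', a')."""
    if done:
        return reward
    return reward + GAMMA * np.max(q_values(next_state, target_weights))

state_dim = 8
w_target = rng.normal(size=(state_dim, N_ACTIONS))
s_next = rng.normal(size=state_dim)
y = dqn_target(1.0, s_next, w_target, done=False)
```

In the proposed setup, the appeal of the game engine is that transitions (s, a, r, s') for such updates can be generated in simulation, avoiding costly and risky real-world flights.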

The remainder of this paper is organized as follows: Section 2 presents the proposed framework and workflow. Section 3 addresses LIDAR data processing. Section 4 discusses game-engine-enabled virtual training of the LIDAR agent for optimal navigation. Section 5 discusses LIDAR data error sources. Section 6 presents both simulation and field experimental results. Section 7 concludes this paper and lists future work.
