1. Introduction
The development of a digital twin of an urban environment requires live streaming of measurement data to maintain a real-time, accurate representation of the environment. It is preferable that the measurements be collected in a continuous and automated fashion. Machine-learning-enabled measurement instruments serve urban data acquisition well because they require minimal manual labor. For example, automated airborne LIDARs are desirable for real-time urban mapping: they provide continuous, high-resolution ranging and depth information by illuminating the object or environment with laser light and measuring the reflection with a sensor. However, training such systems in a real-world urban setting is usually costly.
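The ranging principle mentioned above reduces to time-of-flight geometry: the laser pulse travels to the surface and back, so the one-way range is half the round-trip path length. The sketch below illustrates this relation; the function name and values are illustrative, not taken from the paper.

```python
# Illustrative time-of-flight ranging, as performed by a LIDAR sensor.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """One-way distance to the reflecting surface: the pulse travels
    out and back, so the range is half the round-trip path length."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 200 ns corresponds to a target roughly 30 m away.
print(range_from_time_of_flight(200e-9))  # ~29.98
```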
The development of a digital twin also calls for real-time visualization of heterogeneous data, especially live streaming data (Saddik, 2018). Visualization plays a critical role in the development of digital twins. Game engines are often used to render the large, detailed 3D environments that geospatial experts seek to replicate (Andrews, 2020). The coordinate system within a game engine can replicate the 3D localization of objects and terrain while taking advantage of the engine's optimization and portability. Both interactive and performant, game engines are near-ideal candidates for visualizing and interacting with geographic environments, and thus with urban environments in particular (Rusu, 2018). Industry clearly agrees on this point: both Google and Mapbox have built APIs and SDKs to bring their infrastructure and frameworks into the Unity game engine (Google, 2021; Mapbox, 2021).
However, game engines are simply not built to handle live streaming data from unsupported objects, nor to render dynamically changing meshes defined by such data. Most previous work with LIDARs localizes the sensing platform by deducing its own position from LIDAR data (Chong, 2013a; Chong, 2013b). Other work combines telemetry sensors with LIDAR data for the same purpose (Toroslu, 2018). While these approaches work well for object detection or short-term scans, they do not support collaborative scans, in which multiple scans are stitched together automatically through the geographic significance of their vertices in a three-dimensional environment.
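The collaborative stitching described above amounts to expressing every scan's points in one shared, georeferenced frame. A minimal sketch of that idea, assuming each scan arrives with a known rigid pose (rotation and translation) for the scanner; the function names and poses here are assumptions for illustration, not the paper's API:

```python
import numpy as np

def to_world(points_local: np.ndarray, rotation: np.ndarray,
             translation: np.ndarray) -> np.ndarray:
    """Apply the rigid transform p_world = R @ p_local + t
    to an (N, 3) array of scan points."""
    return points_local @ rotation.T + translation

def stitch(scans) -> np.ndarray:
    """scans: iterable of (points, R, t) tuples, one per partial scan.
    Returns a single merged (N, 3) cloud in the shared world frame."""
    return np.vstack([to_world(p, R, t) for p, R, t in scans])

# Two toy scans taken from different scanner positions merge into one cloud.
identity = np.eye(3)
scan_a = np.zeros((2, 3))          # points local to scanner A
scan_b = np.ones((3, 3))           # points local to scanner B
merged = stitch([(scan_a, identity, np.array([10.0, 0.0, 0.0])),
                 (scan_b, identity, np.zeros(3))])
print(merged.shape)  # (5, 3)
```

Because the merge keys on geographic coordinates rather than scan order, scans from multiple agents can be combined in any sequence.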
In this paper, we describe a framework that connects IoT devices to game engines, with point cloud pre- and post-processing techniques for surface reconstruction in urban mapping. The framework allows the reconstruction of an environment to be observed in real time. Unlike existing work on mapping with LIDAR data (Agrawal, 2017), our implementation of large-scale live maps inside a game engine has proven very intuitive in testing. We also propose using a game engine to generate a virtual urban environment in which an airborne LIDAR agent equipped with DQN reinforcement learning is trained. This offers a safe, efficient, and low-cost approach to training the LIDAR agent. When the trained agent is deployed in a real urban environment, the game engine also visualizes the collected LIDAR data through surface reconstruction.
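The core of DQN training mentioned above is the Bellman update target: each transition's target value is the immediate reward plus the discounted best next-state Q-value, truncated at episode end. The sketch below shows only this target computation under assumed toy inputs; the state and action spaces of the actual navigation agent are not specified here.

```python
import numpy as np

def dqn_targets(rewards: np.ndarray, next_q_values: np.ndarray,
                done: np.ndarray, gamma: float = 0.99) -> np.ndarray:
    """Bellman targets y = r + gamma * max_a' Q(s', a'),
    with the future term zeroed for terminal transitions."""
    return rewards + gamma * next_q_values.max(axis=1) * (1.0 - done)

# Two toy transitions: the second ends the episode, so its target
# ignores any future value.
rewards = np.array([1.0, 0.5])
next_q = np.array([[0.2, 0.8],
                   [0.1, 0.3]])
done = np.array([0.0, 1.0])
print(dqn_targets(rewards, next_q, done))  # [1.792 0.5]
```

In a full DQN implementation these targets would regress the Q-network's predictions for the taken actions, typically with an experience replay buffer and a periodically synchronized target network.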
The remainder of this paper is organized as follows. Section 2 presents the proposed framework and workflow. Section 3 addresses LIDAR data processing. Section 4 discusses game-engine-enabled virtual training of the LIDAR agent for optimal navigation. Section 5 discusses LIDAR data error sources. Section 6 presents both simulation and field experimental results. Section 7 concludes the paper and outlines future work.