Multi-Sensor Integrated Navigation in Urban and Indoor Environments: GNSS, Inertial, and Range Sensors Integration

Mohamed Atia (Carleton University, Canada)
Copyright: © 2018 |Pages: 46
DOI: 10.4018/978-1-5225-3528-7.ch010


The art of multi-sensor processing, or “sensor fusion,” is the ability to optimally infer state information from multiple noisy streams of data. One major application area in which sensor fusion is commonly used is navigation technology. While global navigation satellite systems (GNSS) can provide centimeter-level location accuracy worldwide, they suffer from signal availability problems in dense urban environments and hardly work indoors. Although several alternative backups have been proposed, so far no single sensor or technology can provide the desired precise localization in such environments at reasonable cost and with affordable infrastructure. Therefore, combining sensors is beneficial for navigating through these complex areas. Common sensors used to augment or replace GNSS in complex environments include inertial measurement units (IMU), range sensors, and vision sensors. This chapter discusses the design and implementation of a tightly coupled sensor fusion of GNSS, IMU, and light detection and ranging (LiDAR) measurements to navigate in complex urban and indoor environments.
Chapter Preview


Currently, the dominant navigation technology is the Global Navigation Satellite System (GNSS) (Misra & Enge, 2011; Farrell, 2008; Ahmed, 2002). GNSS-based localization has improved significantly over the past decade thanks to additional satellite constellations and the development of augmentation networks (Misra & Enge, 2011), such as Wide Area Augmentation Systems (WAAS) and Space-Based Augmentation Systems (SBAS). In addition to the U.S. system (the Global Positioning System, GPS), several other satellite-based systems have been deployed or redeployed, such as the Russian system (Global Navigation Satellite System, GLONASS), the Chinese system (BeiDou Navigation Satellite System, BDS), and the European satellite navigation system, Galileo. Having several satellite constellations increases the availability and coverage of the positioning service. For example, GLONASS has better coverage at northern latitudes, which supplements the limited coverage of GPS satellites in these high-latitude areas. Furthermore, an increased number of satellites improves the satellite-user geometry, which in turn enhances location accuracy. While GNSSs provide accurate absolute positioning under open sky, they suffer from signal attenuation and multipath in dense urban areas. Indoors, signals transmitted by GNSS satellites are often not receivable at all. Therefore, navigation in dense urban and indoor environments remains a challenge that cannot be addressed by any single navigation technology at reasonable cost and with affordable infrastructure.
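The geometry effect mentioned above is commonly quantified by the dilution of precision (DOP). As an illustrative sketch (not code from the chapter, and with made-up satellite and user positions), the geometric DOP can be computed from the unit line-of-sight vectors as follows:

```python
import numpy as np

def gdop(sat_positions, user_pos):
    """Geometric dilution of precision for a given satellite/user geometry."""
    los = sat_positions - user_pos                       # line-of-sight vectors
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    # Geometry matrix: unit vectors plus a column for the receiver clock bias.
    H = np.hstack([unit, np.ones((len(sat_positions), 1))])
    Q = np.linalg.inv(H.T @ H)                           # cofactor matrix
    return float(np.sqrt(np.trace(Q)))

# Made-up geometry: four satellites at 45-degree elevation plus one at zenith.
user = np.zeros(3)
sats = np.array([[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1], [0, 0, 1]],
                dtype=float) * 20200e3
gdop_4 = gdop(sats[[0, 1, 2, 4]], user)  # four visible satellites
gdop_5 = gdop(sats, user)                # one additional satellite
```

Because an extra satellite only adds a row to H, the matrix H^T H grows in the positive semi-definite sense, so the DOP can only decrease or stay the same; this is the geometric improvement the paragraph refers to.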

To fill this gap, several alternatives to GNSS have been proposed. One of the most common is the inertial navigation system (INS) (Farrell, 2008; Savage, 2007; El-Sheimy, Hou, & Niu, 2007; Titterton & Weston, 2004). INS is one of the earliest navigation technologies and can be used to build self-contained, independent navigation systems that work in complex environments. The general concept behind the integration (i.e., fusion) of GNSS and INS is the online recursive calibration of INS sensor errors during periods of strong GNSS coverage, which is then exploited to improve standalone INS performance when GNSS is lost or attenuated. However, inertial sensors (especially low-cost ones) have complex stochastic error characteristics that change dynamically. Therefore, even a well-calibrated INS cannot sustain reliable accuracy for long periods without external measurements, due to unavoidable sensor error accumulation, as the chapter will demonstrate later. To develop robust navigation systems that can operate for long periods without GNSS, additional aiding sources are needed.
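To see why standalone INS accuracy degrades, consider a simplified, hypothetical one-dimensional example (the bias value and time step below are assumed, not taken from the chapter): a small residual accelerometer bias is integrated twice by the navigation equations, so the resulting position error grows roughly quadratically with time.

```python
def dead_reckon(accel_bias, dt, steps):
    """1-D dead reckoning with a constant uncompensated accelerometer bias.

    The true acceleration is zero, so the output is pure error: the bias is
    integrated once into a velocity error (~ b*t) and again into a position
    error (~ 0.5*b*t^2).
    """
    v = p = 0.0
    for _ in range(steps):
        a = 0.0 + accel_bias  # measured acceleration = truth + bias
        v += a * dt           # first integration
        p += v * dt           # second integration
    return p

bias = 0.01                        # ~1 mg accelerometer bias (assumed value)
drift_60s = dead_reckon(bias, 0.01, 6000)  # position error after 60 s
```

Even this modest, MEMS-class bias produces roughly 0.5 * 0.01 * 60^2 = 18 m of position error after only one minute, which is why low-cost INS cannot bridge long GNSS outages without external aiding.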

In mapping technology, range and vision sensors (Groves, 2013; Geiger, Ziegler, & Stiller, 2011) are used to build a map of the surroundings. Current mapping technology depends on an accurate positioning system to project range/vision measurements onto a reference frame and hence construct a map of the environment. Without a reliable positioning system, an accurate map cannot be developed. To overcome this issue, simultaneous localization and mapping (SLAM) (Harmat, Trentini, & Sharf, 2015; Huang & Dissanayake, 2007; Luo & Lai, 2014) is used. Accurate SLAM commonly requires post-processing of the collected data to perform a global optimization (including loop closures and bundle adjustment) that generates a precise map and an accurate record of the travelled trajectory. While SLAM is an elegant approach that solves localization and mapping within a single formulation, it may suffer from scalability and real-time challenges in large urban and indoor areas due to the accumulation of a large number of map features. Real-time SLAM-based localization may be feasible using fast algorithms such as FastSLAM (Montemerlo, Thrun, Koller, & Wegbreit, 2002), or when a map of the area already exists (Lynen et al., 2015). However, in highly dynamic environments, maps may need frequent updates. To address this challenge, the integration of range/vision sensors into the online positioning process (i.e., filtering) has been proposed (Perera, Wijesoma, & Adams, 2010; Luo & Lai, 2014). The idea is to use mapping sensors (either range or vision) as “motion” sensors, not only as environment perception/mapping sensors, and to fuse their motion information with other common sensors (e.g., GPS/INS). This chapter therefore focuses on the fusion of relative motion information extracted from LiDAR sensors with a GNSS/INS assembly to enhance positioning in dense urban and indoor environments.
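As a concrete sketch of the filtering idea, the snippet below shows a generic Kalman measurement update in which a position fix derived from LiDAR scan matching corrects a drifted INS prediction. This is an illustration only, not the chapter's tightly coupled design, and all numerical values (state, covariances, measurement noise) are assumed:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected state
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P

# State: [position (m), velocity (m/s)]. The INS prediction has drifted to
# 10.5 m, while LiDAR scan matching indicates the position is 10.0 m.
x = np.array([10.5, 1.0])
P = np.diag([4.0, 1.0])          # inflated position uncertainty after drift
H = np.array([[1.0, 0.0]])       # LiDAR-derived fix observes position only
R = np.array([[0.04]])           # ~0.2 m scan-matching noise (assumed)
x_new, P_new = kf_update(x, P, np.array([10.0]), H, R)
```

Because the predicted position is far less certain than the LiDAR measurement, the update pulls the estimate strongly toward the scan-matching fix and shrinks the position covariance, which is exactly the role the aiding sensor plays during GNSS outages.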
