Egocentric Landmark-Based Indoor Guidance System for the Visually Impaired

Zhuorui Yang, Aura Ganz
Copyright: © 2017 | Pages: 15
DOI: 10.4018/IJEHMC.2017070104

Abstract

In this paper, we introduce an egocentric landmark-based guidance system that enables visually impaired users to interact with indoor environments. The user, who wears Google Glass, captures his surroundings within his field of view. Using this information, we provide the user an accurate landmark-based description of the environment, including his relative distance and orientation to each landmark. To achieve this functionality, we developed a near real-time, accurate, vision-based localization algorithm. Since the users are visually impaired, the images captured with Google Glass may exhibit severe blur, motion blur, low illumination intensity, and crowd obstruction, and our algorithm accounts for these deformations. We tested the algorithm's performance in a 12,000 ft² open indoor environment. With pristine query images, our algorithm obtains mean location accuracy within 5 ft, mean orientation accuracy of less than 2 degrees, and reliability above 88%. After applying deformation effects to the query images, such as blur, motion blur, and illumination changes, the reliability remains above 75%.

Introduction

According to the latest statistics reported by the World Health Organization (“Visual impairment and blindness”, 2016), there are 285 million blind and visually impaired (BVI) people worldwide. This community relies on white canes or guide dogs as mobility aids. Orientation and Mobility (O&M) instructors teach BVI users how to navigate indoor spaces that they frequently use for studying, working, or shopping. However, navigating unfamiliar environments is a challenge that BVI users cannot meet without a sighted guide.

In (Kaiser & Lawo, 2012) the authors introduce a wearable navigation system for visually impaired users in indoor and outdoor environments. The system components include a laser, an inertial measurement unit (IMU), a wearable computer, and audio bone headphones. The positioning algorithm combines Simultaneous Localization and Mapping (SLAM) with Pedestrian Dead Reckoning (PDR).
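For readers unfamiliar with PDR, the sketch below shows the core dead-reckoning update such systems build on: each detected step advances the position estimate along the current heading. This is a minimal Python illustration; the function name and the step-length and heading values are hypothetical, and in a real system they would come from the IMU's step detector and orientation filter.

```python
import math

def pdr_update(x, y, heading_rad, step_length_m):
    """Advance the position estimate by one detected step along the
    current heading (heading_rad = 0 points along the +x axis)."""
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

# Hypothetical walk: three 0.7 m steps heading along +x
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_update(pos[0], pos[1], 0.0, 0.7)
print(pos)  # approximately (2.1, 0.0)
```

Because step-length and heading errors accumulate, PDR drifts over time, which is presumably why (Kaiser & Lawo, 2012) pairs it with SLAM.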

In (Zeb, Ullah, & Rabbi, 2014) the authors present a vision-based indoor auditory navigation system for visually impaired users who carry a webcam. By detecting visual markers deployed in the environment, users receive turn-by-turn instructions computed with a shortest-path algorithm.
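Turn-by-turn guidance of this kind typically rests on a shortest-path search over a graph whose nodes are the deployed markers. The paper does not publish its routing code, so the following Dijkstra sketch, with hypothetical marker names and distances, only illustrates the idea:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a marker graph; graph maps node -> {neighbor: distance}.
    Returns the marker sequence that turn-by-turn instructions would follow."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + dist, neighbor, path + [neighbor]))
    return None

# Hypothetical marker graph (distances in meters)
markers = {
    "entrance": {"hallway": 10.0},
    "hallway": {"entrance": 10.0, "room_12": 6.0},
    "room_12": {"hallway": 6.0},
}
print(shortest_path(markers, "entrance", "room_12"))
# -> ['entrance', 'hallway', 'room_12']
```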

In (Fukasawa & Magatani, 2012) the authors describe an indoor navigation system for the visually impaired using a modified white cane that includes color sensors and an RFID reader in addition to a microcontroller, a speaker and a vibrator. The system assumes that RFID tags and colored lines are deployed on the navigation paths on the floor. Using the modified white cane, the user can follow the marked navigation paths.

PERCEPT, a smartphone-based indoor navigation system for the blind and visually impaired, was introduced in (Ganz et al., 2011; Ganz, Schafer, Tao, Wilson & Robertson, 2014). PERCEPT provides detailed navigation instructions and assumes that Near Field Communication (NFC) tags are deployed in the environment at specific landmarks. The system was deployed in large buildings as well as a subway station and was successfully tested with over 40 blind and visually impaired users.

Additional indoor navigation systems for BVI users have been published using WiFi, RFID, ZigBee, and Bluetooth technologies for positioning (Au et al., 2013; Ganz, Gandhi, Wilson & Mullett, 2010; Pritt, 2013; Alghamdi, Van Schyndel & Alahmadi, 2013; Larranaga, Muguira, Lopez-Garde & Vazquez, 2010).

Using a vision-based positioning approach yields a number of advantages: a) sub-meter accuracy of the user's location, b) high-accuracy estimation of the user's orientation, and c) low-cost deployment and maintenance. A review of recent vision-based positioning approaches developed specifically for BVI users is provided in Section II.

We introduce an accurate, reliable, scalable, and real-time vision-based indoor positioning algorithm that computes the user's location and orientation. Using this information, we developed an indoor guidance system that provides visually impaired users a landmark-based description of their surroundings. For each landmark, we also report its distance and clock position relative to the user's location and orientation.
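For concreteness, the sketch below shows one way to derive a landmark's distance and clock position from the user's estimated pose. It is a minimal illustration under assumed conventions (2D floor coordinates, heading measured counterclockwise from the +x axis), not the system's actual code:

```python
import math

def landmark_report(user_x, user_y, user_heading_rad, lm_x, lm_y):
    """Return (distance, clock_position) of a landmark relative to the
    user's location and facing direction; 12 o'clock is straight ahead."""
    dx, dy = lm_x - user_x, lm_y - user_y
    distance = math.hypot(dx, dy)
    # Bearing of the landmark relative to the heading, wrapped to [-pi, pi)
    rel = math.atan2(dy, dx) - user_heading_rad
    rel = (rel + math.pi) % (2 * math.pi) - math.pi
    # Nearest clock hour, counting clockwise (toward the user's right)
    hour = round(-rel / (2 * math.pi) * 12) % 12
    return distance, 12 if hour == 0 else hour

# Landmark 10 ft directly to the right of a user facing along +x
print(landmark_report(0.0, 0.0, 0.0, 0.0, -10.0))  # -> (10.0, 3)
```

Under these assumptions, such a landmark would be announced as being 10 feet away at the user's 3 o'clock.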

The rapid proliferation of wearable devices, such as Google Glass, enables researchers to integrate them in systems used to augment and improve the perception of BVI users. In this paper, Google Glass serves as an egocentric vision system used to capture the user surroundings.

To the best of our knowledge, this is the first egocentric vision-based indoor positioning technique developed on a commercial wearable device (e.g., Google Glass) that processes reference information from the point cloud instead of images and uses a non-iterative pose estimation algorithm, EPnP (Lepetit, Moreno-Noguer & Fua, 2009), to meet the requirements and limitations introduced above. Moreover, the proposed algorithm is also the first to be successfully tested with realistic image datasets that include deformations such as severe blur, motion blur, low illumination intensity, and crowd obstruction.
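EPnP is available in OpenCV as a solvePnP flag, so a minimal sketch of the pose-recovery step can be given with synthetic data. The 2D-3D correspondences, camera intrinsics, and coordinates below are hypothetical placeholders, not the paper's dataset; they only show the shape of the computation:

```python
import cv2
import numpy as np

# 3D points from the reference point cloud matched to the query image
# (hypothetical coordinates, in meters)
object_points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0], [0.5, 0.5, 1.0], [0.2, 0.8, 0.5],
], dtype=np.float64)

# Corresponding 2D feature locations in the Glass image (hypothetical, pixels)
image_points = np.array([
    [320.0, 240.0], [420.0, 238.0], [322.0, 140.0],
    [424.0, 142.0], [372.0, 180.0], [340.0, 170.0],
], dtype=np.float64)

# Camera intrinsics (would come from calibrating the Glass camera)
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(4)  # assume negligible lens distortion

# EPnP: non-iterative O(n) pose estimation from n >= 4 correspondences
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation matrix from Rodrigues vector
    camera_pos = (-R.T @ tvec).ravel()  # user's location in world coordinates
    print("estimated user position:", camera_pos)
```

The non-iterative nature of EPnP is what keeps this step fast enough for near real-time use on resource-constrained wearable hardware.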
