Egocentric Landmark-Based Indoor Guidance System for the Visually Impaired

Zhuorui Yang (University of Massachusetts Amherst, USA) and Aura Ganz (University of Massachusetts Amherst, USA)
Copyright: © 2018 | Pages: 17
DOI: 10.4018/978-1-5225-5204-8.ch061

Abstract

In this paper, we introduce an egocentric landmark-based guidance system that enables visually impaired users to interact with indoor environments. The user, who wears Google Glass, captures his surroundings within his field of view. Using this information, we provide the user with an accurate landmark-based description of the environment, including his relative distance and orientation to each landmark. To achieve this functionality, we developed a near-real-time, accurate, vision-based localization algorithm. Since the users are visually impaired, our algorithm accounts for images captured with Google Glass that exhibit severe blurriness, motion blurriness, low illumination intensity and crowd obstruction. We tested the algorithm's performance in a 12,000 ft² open indoor environment. With mint (undistorted) query images, our algorithm obtains mean location accuracy within 5 ft, mean orientation accuracy of less than 2 degrees and reliability above 88%. After applying deformation effects to the query images, such as blurriness, motion blurriness and illumination changes, we observe that the reliability remains above 75%.

Introduction

According to the latest statistics reported by the World Health Organization (“Visual impairment and blindness”, 2016), there are 285 million blind and visually impaired (BVI) people worldwide. This community relies on white canes or guide dogs as mobility aids. Orientation and Mobility (O&M) instructors teach BVI users how to navigate indoor spaces that they frequently use for studying, working or shopping. However, navigating unfamiliar environments is a challenge that cannot be met without a sighted guide.

In (Kaiser & Lawo, 2012) the authors introduce a wearable navigation system for visually impaired users in indoor and outdoor environments. The system components include a laser, an inertial measurement unit (IMU), a wearable computer and audio bone headphones. The positioning algorithm combines Simultaneous Localization and Mapping (SLAM) with Pedestrian Dead Reckoning (PDR).

In (Zeb, Ullah, & Rabbi, 2014) the authors present a vision-based indoor auditory navigation system for visually impaired users who carry a webcam. By detecting visual markers deployed in the environment, the users receive turn-by-turn instructions based on a shortest-path algorithm.

In (Fukasawa & Magatani, 2012) the authors describe an indoor navigation system for the visually impaired using a modified white cane that includes color sensors and an RFID reader in addition to a microcontroller, a speaker and a vibrator. The system assumes that RFID tags and colored lines are deployed on the navigation paths on the floor. Using the modified white cane, the user can follow the marked navigation paths.

PERCEPT, a smartphone-based indoor navigation system for the blind and visually impaired, was introduced in (Ganz et al., 2011; Ganz, Schafer, Tao, Wilson & Robertson, 2014). PERCEPT provides detailed navigation instructions and assumes that Near Field Communication tags are deployed in the environment at specific landmarks. The system was deployed in large buildings as well as a subway station, and was successfully tested with over 40 blind and visually impaired users.

Additional indoor navigation systems for BVI have been published using WiFi, RFID, Zigbee and Bluetooth technologies for positioning (Au et. al., 2013; Ganz, Gandhi, Wilson & Mullett, 2010; Pritt, 2013; Alghamdi, Van Schyndel & Alahmadi, 2013; Larranaga, Muguira, Lopez-Garde & Vazquez, 2010).

Using a vision-based positioning approach, we obtain a number of advantages: a) sub-meter accuracy of the user's location, b) high-accuracy estimation of the user's orientation, and c) low-cost deployment and maintenance. A review of recent vision-based approaches that were specifically developed for BVI users is provided in Section II.

We introduce an accurate, reliable, scalable and real-time vision-based indoor positioning algorithm that computes the user's location and orientation. Using this information, we developed an indoor guidance system that provides visually impaired users with a landmark-based description of their surroundings. For each landmark, we also report its distance and clock position relative to the user's location and orientation.
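To illustrate, the per-landmark distance and clock-position report could be derived from the estimated user pose roughly as follows. This is a minimal sketch, not the chapter's implementation: the 2-D floor-plan coordinate frame, the units (feet), and the function name are our own assumptions.

```python
import math

def landmark_clock_position(user_xy, heading_deg, landmark_xy):
    """Return (distance, clock hour) of a landmark relative to a user.

    user_xy, landmark_xy: (x, y) floor coordinates in feet (assumed frame).
    heading_deg: user's facing direction, degrees counter-clockwise from +x.
    """
    dx = landmark_xy[0] - user_xy[0]
    dy = landmark_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    # Bearing of the landmark in the world frame, then relative to heading.
    bearing = math.degrees(math.atan2(dy, dx))
    relative = (bearing - heading_deg) % 360.0
    # 12 o'clock is straight ahead; clock hours increase clockwise.
    hour = round((360.0 - relative) / 30.0) % 12
    return distance, 12 if hour == 0 else hour
```

For example, a user facing "north" with a landmark 10 ft directly ahead would be told the landmark is at 12 o'clock; a landmark the same distance to the right would be reported at 3 o'clock.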

The rapid proliferation of wearable devices, such as Google Glass, enables researchers to integrate them into systems that augment and improve the perception of BVI users. In this paper, Google Glass serves as an egocentric vision system used to capture the user's surroundings.

To the best of our knowledge, this is the first egocentric vision-based indoor positioning technique implemented on a commercial wearable device (e.g., Google Glass) that processes reference information from a point cloud instead of images and uses a non-iterative pose estimation algorithm, EPnP (Lepetit, Moreno-Noguer & Fua, 2009), to meet the requirements and limitations introduced above. Moreover, the proposed algorithm is also the first to be successfully tested with realistic image datasets that include deformations such as severe blurriness, motion blurriness, low illumination intensity and crowd obstruction.
