Depth-Vision Coordinated Robust Architecture for Obstacle Detection and Haptic Feedback

Alexander Forde, Kevin Laubhan, Kumar Yelamarthi
Copyright © 2015 | Pages: 14
DOI: 10.4018/IJHCR.2015040102

Abstract

Lightweight, low-cost three-dimensional depth sensors have gained much attention in the computer vision and gaming industries. While their performance has proven successful in gaming, these sensors have not yet been utilized successfully in assistive devices. Leveraging this gap, this paper presents the design, implementation, and evaluation of a depth-vision coordinated robust architecture for obstacle detection and haptic feedback for the blind. The proposed system scans the scene ahead, converts it into a depth matrix, processes the information to identify obstacles, including physical objects and humans, and provides relevant haptic feedback for navigation. Through design and evaluation, the proposed system has been shown to successfully identify objects and humans, perform real-time distance measurements, and provide a working solution.
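
To make the pipeline concrete, the minimal Python sketch below illustrates one plausible form of the scan-process-feedback loop described above: a depth matrix is divided into left, center, and right regions, the nearest valid reading in each region is extracted, and distance is mapped to a vibration intensity. The region layout, range thresholds, and all names here are illustrative assumptions, not the authors' actual implementation.

import numpy as np

MAX_RANGE_MM = 4000    # assumed usable sensor range
ALERT_RANGE_MM = 1500  # assumed distance at which feedback begins

def region_distances(depth_mm: np.ndarray) -> dict:
    """Split the depth matrix into left/center/right thirds and return
    the nearest valid (non-zero) reading in each region, in millimeters."""
    _, w = depth_mm.shape
    thirds = {
        "left": depth_mm[:, : w // 3],
        "center": depth_mm[:, w // 3 : 2 * w // 3],
        "right": depth_mm[:, 2 * w // 3 :],
    }
    nearest = {}
    for name, region in thirds.items():
        valid = region[region > 0]  # zero readings are invalid pixels
        nearest[name] = int(valid.min()) if valid.size else MAX_RANGE_MM
    return nearest

def vibration_intensity(distance_mm: float) -> float:
    """Map a distance to a 0..1 vibration intensity: stronger when closer."""
    if distance_mm >= ALERT_RANGE_MM:
        return 0.0
    return 1.0 - distance_mm / ALERT_RANGE_MM

# Example: feedback computed from one synthetic 480x640 depth frame
frame = np.random.randint(400, MAX_RANGE_MM, size=(480, 640))
for region, dist in region_distances(frame).items():
    print(region, dist, round(vibration_intensity(dist), 2))

In a full system, the per-region intensities would drive the corresponding haptic actuators; distinguishing humans from physical objects, as the proposed system does, requires additional processing not sketched here.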

Introduction

People with special needs typically rely on support devices while performing their daily tasks. Advances in technology have enabled the design and implementation of numerous support devices, supplementing traditional aids such as the white cane or the guide dog for the blind. Acquiring information from the environment using sensors and processing it on portable computational units in real time has become feasible over the past few years. The World Health Organization (2014) estimates that 285 million people are visually impaired worldwide, and the US Census Bureau reported that 54 million people live with disabilities (National Council on Disability, 2013). Many of these individuals live and interact independently, but the majority lack the ability to perform certain actions by themselves. A major issue these individuals face is the inability to process their surroundings and identify obstacles during their daily commute. These challenges have led to the introduction of many assistive devices for navigation assistance, the two most common being white canes and guide dogs (Kirchner, 1995). The white cane is only partially effective, as it does not detect objects above knee height and does not provide cues in sufficient time to avoid a collision in a populated area. A major problem, as stated by Vinton Cerf, vice president and chief Internet evangelist for Google, is that “Everybody is not the same; the same solution does not work for everyone” (Zielinski, 2014). This kind of roadblock has caused a standstill in certain types of research, primarily research dealing with complex disabilities, in this case visual impairment. This technological pause could create a financial incentive for companies to pave a path of improvement and market a product that offers near-universal benefit to its users.

Advances in computer vision technology provide a solution to this challenge. In recent years, many developments have been made and solutions proposed for obstacle detection. Bousbia-Salah et al. (2011) proposed a method of detecting obstacles on the ground through ultrasonic sensors embedded in the white cane and worn on the user's shoulders. Brock and Kristensson (2013) presented a vibrotactile belt that relays the position and distance of an obstacle through the position and intensity of vibration. Shoval et al. (1998) also proposed a navigation belt comprising an array of ultrasonic sensors to alert the user to nearby obstacles, but it is not an ideal method for operation in populated and dynamically changing environments. Ma et al. (2009) proposed an object detection algorithm that uses edges and motion to detect dynamic obstacles. Castellas et al. (2010) used a vision sensor to detect various obstacles, supplemented by a traditional white cane; images were used to detect sidewalk borders and obstacles in a preset window, but the approach loses accuracy in denser environments. Yankun et al. (2011) presented an obstacle detection algorithm based on edge detection of the image obtained from a single camera. Zeng et al. (2012) and Ishiwata et al. (2013) presented exploration and avoidance systems with haptic feedback, but both use a time-of-flight camera for obstacle detection, which is expensive and limits adoption due to economic constraints. Sainarayanan et al. (2007) presented a fuzzy-clustering-based algorithm to locate obstacles and provide feedback to the user through headphones, but their system requires high processing power, and users could find it difficult to distinguish the sounds in a loud environment. Filipe et al. (2012) presented a depth-sensor-based system, but did not present any mechanism for providing feedback to the user for navigation assistance. Khoshelham and Elberink (2012) presented a depth-sensor-based approach for indoor mapping of objects, but did not include a feedback system for navigation either. Han presented an overview of applications of a similar depth sensor, but, like Filipe et al. (2012) and Khoshelham and Elberink (2012), did not present a navigation and feedback solution. Dakopoulos (2007) designed a device that identifies objects in front of the user and relays the corresponding obstacle information through a vibration array mounted on the user's chest.
