Implementation of Biologically Inspired Components in Embedded Vision Systems

Christopher Wing Hong Ngau, Li-Minn Ang, Kah Phooi Seng
DOI: 10.4018/978-1-4666-2539-6.ch013

Abstract

Studies in computational vision have shown that visual attention (VA) processing can aid a variety of visual tasks by simplifying the handling of complex data and by supporting action decisions using readily available low-level features. Because VA processing incorporates computational biological vision components that mimic mechanisms of the human visual system, it is computationally complex, has heavy memory requirements, and is most often implemented on workstations without resource constraints. In embedded systems, by contrast, computational capacity and memory resources are a primary concern. To enable VA processing in such systems, this chapter presents a low-complexity, low-memory VA model based on an established mainstream VA model. The model addresses critical factors of algorithm complexity, memory requirements, computational speed, and salience prediction performance to ensure reliable VA processing in a resource-limited environment. Lastly, a custom softcore microprocessor-based hardware implementation on a Field-Programmable Gate Array (FPGA) is used to verify the implementation feasibility of the presented low-complexity, low-memory VA model.

Introduction

Imaging and semiconductor technologies have matured substantially over the previous decade, paving the way for revolutionary vision applications. Whenever static images or videos are of concern, most vision systems perform some form of processing on the immense amount of visual information received, either to remove data redundancies or to select useful information for subsequent actions of the system. While vision systems are given artificial vision by means of image processing algorithms, research efforts are constantly undertaken to create intelligent vision systems that can interpret visual information in a manner similar to humans. Even for current vision systems, selecting useful information from a large information pool can be rather complicated. Consider a system performing object detection and recognition: multiple sliding windows and trained classifiers have to be used to determine which regions of the visual input contain parts of an object before the object can be recognized (Frintrop, 2011). Although this approach is fairly straightforward for visual inputs containing simple objects, specifically trained classifiers are required for complex visual scenes. Furthermore, the multiple sliding window approach may not be efficient for large visual inputs. There is therefore a need for an approach that can efficiently select relevant information for higher-level processing such as classification, intelligent compression, and recognition without relying on pre-determined data or parameters.
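To make the cost of the sliding-window approach concrete, the following is a minimal sketch of window scanning over an intensity image. The classifier is a hypothetical placeholder (a real system would use a trained model), and the window size, stride, and confidence threshold are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def sliding_window_detect(image, classify, win=32, step=16):
    """Scan a 2-D intensity image with fixed-size windows and keep
    windows whose classifier score exceeds a confidence threshold.

    `classify` is a hypothetical stand-in for a trained classifier:
    it maps a (win, win) patch to a confidence score in [0, 1].
    Note the cost grows with image size: roughly (h/step) * (w/step)
    classifier evaluations per image.
    """
    detections = []
    h, w = image.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = image[y:y + win, x:x + win]
            score = classify(patch)
            if score > 0.5:  # arbitrary confidence threshold
                detections.append((x, y, score))
    return detections

# Toy usage: a "classifier" that simply scores mean brightness,
# applied to an image with one bright 32x32 block.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
hits = sliding_window_detect(img, lambda p: p.mean())
```

Even this toy run evaluates the classifier nine times for a 64x64 image; at realistic resolutions and multiple window scales, the evaluation count grows quickly, which is the inefficiency the text refers to.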

Research in visual attention (VA) provides insight into the challenges faced by many vision systems: priority regions in the visual input can be identified without any prior training through artificial visual attention processing. This processing is achieved by a VA model comprising biologically inspired components modeled on the retina and the V1 cortex. The use of VA in vision systems also helps to aid action decisions (Frintrop, 2011). Vision systems often operate in complex and possibly unknown environments, so the system must locate and interpret relevant parts of the visual scene to decide on the actions to be taken. VA allows relevant parts to be detected with ease using models of attention based on the human visual system. Various studies on VA-related vision processing have shown the importance of an attentional mechanism in a general vision system working with complex visual data (Frintrop, 2011; Begum & Karray, 2011; Frintrop & Jensfelt, 2008).

While VA models offer an artificial visual attention mechanism applicable to a wide range of vision systems, they have a disadvantage in terms of computational complexity. The complex and parallel algorithm structures of VA models have to be taken into consideration when implementing them in an environment with limited computational capabilities. The computational time of VA processing increases three- to four-fold when the processing is serialized, so feasibility for real-time applications has to be evaluated. Furthermore, the blob-like conspicuous regions detected by early VA models (Itti, Koch, & Niebur, 1998; Ma & Zhang, 2003) are being phased out in favor of high-resolution outputs in newer models that serve a wider range of vision applications. This has indirectly increased computational cost and memory requirements (Huang, He, Cai, Zou, Liu, Liang, & Chen, 2011), making the implementation of VA in embedded systems challenging. To exploit the advantages of the attentional mechanism for vision processing in a resource-constrained system, a low-complexity, low-memory VA model must first be developed without compromising prediction performance.
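The core operation behind the blob-like conspicuity maps of early VA models is a center-surround difference between fine and coarse views of a feature channel. The sketch below illustrates that idea on a single intensity channel using box filters; the full Itti-style model combines many channels (intensity, color, orientation) across a Gaussian pyramid of scales, so the filter sizes and the single scale pair here are simplifying assumptions for illustration only.

```python
import numpy as np

def box_blur(img, k):
    """Blur with a k x k box filter (edge-padded) by summing shifted
    copies of the image. A crude but dependency-free smoothing step."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def center_surround_saliency(intensity, center_k=3, surround_k=9):
    """Center-surround difference on one intensity channel: regions that
    differ strongly from their surroundings score high. This is only a
    one-channel, one-scale sketch of the mechanism, not the full model.
    """
    center = box_blur(intensity, center_k)
    surround = box_blur(intensity, surround_k)
    sal = np.abs(center - surround)
    peak = sal.max()
    return sal / peak if peak > 0 else sal

# Toy usage: a small bright patch on a dark background should
# dominate the resulting saliency map.
scene = np.zeros((32, 32))
scene[10:14, 10:14] = 1.0
sal = center_surround_saliency(scene)
```

Even this stripped-down version requires two full-image filtering passes and an extra float buffer per map, which hints at why the multi-channel, multi-scale versions strain the memory and compute budgets of embedded targets.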
