FPGA Technology for Implementation in Visual Sensor Networks

Wai Chong Chia (The University of Nottingham Malaysia Campus, Malaysia), Wing Hong Ngau (The University of Nottingham Malaysia Campus, Malaysia), Li-Minn Ang (The University of Nottingham Malaysia Campus, Malaysia), Kah Phooi Seng (The University of Nottingham Malaysia Campus, Malaysia), Li Wern Chew (The University of Nottingham Malaysia Campus, Malaysia) and Lee Seng Yeong (The University of Nottingham Malaysia Campus, Malaysia)
DOI: 10.4018/978-1-61350-153-5.ch014

Abstract

A typical Visual Sensor Network (VSN) configuration consists of a set of vision nodes, network motes, and a base station. A vision node captures image data and transmits it to the nearest network mote; the network motes then relay the data through the network until it reaches the base station. Because a vision node is usually small in size and battery-powered, the resources that can be incorporated onto it are restricted. In this chapter, a Field Programmable Gate Array (FPGA) implementation of a low-complexity, strip-based Microprocessor without Interlocked Pipeline Stages (MIPS) architecture is presented. The image data captured by the vision node are processed in a strip-by-strip manner to reduce the local memory requirement, allowing an image of higher resolution to be captured and processed with the limited resources. In addition, parallel access to neighbourhood image data is incorporated to improve the access speed. Finally, the performance of visual saliency computation using the proposed architecture is evaluated.

Introduction

In a Wireless Sensor Network (WSN), a set of sensor nodes capable of wireless communication is distributed across different locations to harvest the desired information. The captured information is transmitted to the nearest network mote, which relays it through the network until it reaches the base station. Initially, WSNs were used to harvest simple information such as temperature, pressure, humidity, or the location of an object (Akyildiz, Melodia & Chowdhury, 2006), because the sensor nodes are small in size and battery-powered. Since the nodes are battery-operated, power consumption is always a critical issue, restricting (a) the type and number of processing units, (b) the peripherals, and (c) the memory that can be incorporated onto a node.

Recently, advances in miniaturisation, such as the production of low-power cameras (Soro & Heinzelman, 2009), have led to the visual sensor node (visual node) and, in turn, to the Visual Sensor Network (VSN), which is capable of capturing image or video data. As a result, the amount of data that needs to be processed by the visual node has increased significantly. The data must also be compressed prior to transmission because of the limited bandwidth; this helps achieve better quality in video streaming (Akyildiz, Melodia & Chowdhury, 2008) as well as reducing power consumption (Bhargava, Kargupta & Powers, 2003). It has been shown that transmitting data consumes far more power than processing it on-board (Raghunathan, Ganeriwal & Srivastava, 2004). The role of the processing unit has therefore become increasingly important, although the trade-off between performance and power consumption should be taken into consideration when selecting a processing unit for the visual node.

Throughout years of development, many commercial products have been produced to target different applications. The processing unit incorporated onto a sensor node can range from a small microcontroller to a microprocessor; ultimately, the choice depends on the amount of data and the complexity of the processing. For example, a more powerful processing unit is preferred when the visual node needs to process a considerably large amount of image data. The majority of these processing units are based on the Reduced Instruction Set Computing (RISC) architecture, such as the ATmega128L microcontroller and the PXA and ARM7 microprocessors. A RISC design adopts only a limited set of simple instructions, each of which can be executed efficiently with the help of pipelining (Luker & Prasad, 2001). Likewise, the Microprocessor without Interlocked Pipeline Stages (MIPS) architecture that serves as the basis for this chapter is a type of RISC architecture.

In this chapter, a Field Programmable Gate Array (FPGA) implementation of a low-complexity, strip-based MIPS architecture targeted at the visual node is presented. The architecture is designed around strip-based processing (Chew, Chia, Ang & Seng, 2009). The basic idea of strip-based processing is to divide the captured image into a number of strips and carry out the processing in a strip-by-strip manner, as shown in Figure 1. The captured image is first stored in the external memory; a single strip is then transferred to the local memory, one at a time, for processing. The procedure is repeated until the entire image has been processed. This method reduces the local memory requirement and allows an image of higher resolution to be processed with limited resources.

Figure 1.

The basics of strip-based processing
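The external-memory/local-memory flow described above can be sketched in software. The following is a minimal, hypothetical Python sketch (not the chapter's FPGA implementation): the full image stands in for the external memory, a small per-strip buffer stands in for the local memory, and `process_strip` is a placeholder for whatever per-strip operation the node performs.

```python
# Hypothetical sketch of strip-based processing. The full array plays the
# role of external memory; only one strip at a time is copied into a small
# "local memory" buffer, processed, and written back.
import numpy as np

def process_strip(strip):
    # Placeholder for the real per-strip operation (e.g. filtering or
    # saliency computation); here it simply inverts the pixel values.
    return 255 - strip

def strip_based_process(image, strip_height):
    height, _ = image.shape
    out = np.empty_like(image)
    for top in range(0, height, strip_height):
        bottom = min(top + strip_height, height)
        local = image[top:bottom].copy()      # transfer one strip to local memory
        out[top:bottom] = process_strip(local)  # process and write the result back
    return out

# Example: an 8x8 image processed as four strips of 2 rows each.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
result = strip_based_process(img, strip_height=2)
```

Note that the peak local buffer holds only `strip_height` rows rather than the whole image, which is the memory saving the chapter exploits.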
