FPGA Technology for Implementation in Visual Sensor Networks

Wai Chong Chia, Wing Hong Ngau, Li-Minn Ang, Kah Phooi Seng, Li Wern Chew, Lee Seng Yeong
ISBN13: 9781613501535|ISBN10: 1613501544|EISBN13: 9781613501542
DOI: 10.4018/978-1-61350-153-5.ch014

MLA

Chia, Wai Chong, et al. "FPGA Technology for Implementation in Visual Sensor Networks." Visual Information Processing in Wireless Sensor Networks: Technology, Trends and Applications, edited by Li-Minn Ang and Kah Phooi Seng, IGI Global, 2012, pp. 293-324. https://doi.org/10.4018/978-1-61350-153-5.ch014



Abstract

A typical Visual Sensor Network (VSN) consists of a set of vision nodes, network motes, and a base station. A vision node captures image data and transmits it to the nearest network mote; the motes then relay the data through the network until it reaches the base station. Because a vision node is usually small and battery-powered, the resources that can be incorporated onto it are restricted. In this chapter, a Field Programmable Gate Array (FPGA) implementation of a low-complexity, strip-based Microprocessor without Interlocked Pipeline Stages (MIPS) architecture is presented. The image data captured by the vision node is processed strip by strip, which reduces the local memory requirement and allows an image of higher resolution to be captured and processed with the limited resources. In addition, parallel access to neighbourhood image data is incorporated to improve access speed. Finally, the performance of visual saliency computation on the proposed architecture is evaluated.
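The memory-saving idea behind strip-based processing can be illustrated with a minimal sketch (not taken from the chapter; the strip height and the per-strip operation are illustrative assumptions). Only one strip of rows is buffered at a time, so peak memory scales with strip height times width rather than with the full frame:

```python
# Hypothetical sketch of strip-based image processing (not the chapter's
# actual MIPS/FPGA implementation). An image is represented as a list of
# rows; only one strip of rows is "buffered" at a time.

STRIP_HEIGHT = 4  # assumed number of rows per strip


def process_strip(strip):
    # Placeholder per-strip operation: a simple intensity sum, standing in
    # for whatever a saliency front-end would compute on the strip.
    return sum(sum(row) for row in strip)


def process_image_by_strips(image, strip_height=STRIP_HEIGHT):
    """Process `image` strip by strip.

    Peak local memory is proportional to strip_height * width instead of
    height * width, which is what lets a resource-limited node handle a
    higher-resolution frame.
    """
    results = []
    for top in range(0, len(image), strip_height):
        strip = image[top:top + strip_height]  # only this slice is held
        results.append(process_strip(strip))
    return results


# Usage: an 8x4 synthetic "image" with pixel values 0..31.
img = [[r * 4 + c for c in range(4)] for r in range(8)]
per_strip = process_image_by_strips(img, strip_height=4)  # two strips
```

A real strip-based pipeline would also overlap adjacent strips by a row or two so that neighbourhood (e.g. 3x3 window) operations remain correct at strip boundaries; that detail is omitted here for brevity.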
