Information Fusion of Multi-Sensor Images

Yu-Jin Zhang
ISBN13: 9781605660264 | ISBN10: 1605660264 | EISBN13: 9781605660271
DOI: 10.4018/978-1-60566-026-4.ch307

MLA

Zhang, Yu-Jin. "Information Fusion of Multi-Sensor Images." Encyclopedia of Information Science and Technology, Second Edition, edited by Mehdi Khosrow-Pour, D.B.A., IGI Global, 2009, pp. 1950-1956. https://doi.org/10.4018/978-1-60566-026-4.ch307

Abstract

Human perception of the outside world is the result of the combined action of the brain and many sensory organs. For example, the intelligent robots currently under investigation can carry many sensors for vision, hearing, taste, smell, touch, pain, heat, force, slide, and proximity (Luo, 2002). All of these sensors provide different profiles of the same scene in the same environment. To coordinate such diverse sensors and combine the information they obtain, the theories and methods of multi-sensor fusion are required.

Multi-sensor information fusion is a basic ability of human beings. A single sensor can provide only incomplete, inaccurate, vague, or uncertain information; sometimes the information obtained by different sensors can even be contradictory. Human beings have the ability to combine the information obtained by different organs and then make estimates and decisions about the environment and events. Using a computer to perform multi-sensor information fusion can therefore be considered a simulation of the human brain's ability to treat complex problems.

Multi-sensor information fusion operates on data coming from various sensors to obtain results that are more comprehensive, accurate, and robust than those obtained from any single sensor. Fusion can be defined as the process of jointly treating data acquired from multiple sensors, as well as sorting, optimizing, and conforming these data, so as to increase the ability to extract information and to improve decision capability. Fusion can extend the coverage of space and time information, reduce fuzziness, increase the reliability of decision making, and improve the robustness of systems. Image fusion is a particular type of multi-sensor fusion that takes images as its operating objects.
In the more general sense of image engineering (Zhang, 2006), the combination of multi-resolution images can also be counted as a fusion process. In this article, however, the emphasis is on the information fusion of multi-sensor images.
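To make the idea of combining pixel data from multiple sensors concrete, the following is a minimal sketch of two common pixel-level fusion rules, weighted averaging and choose-max. It assumes two registered grayscale images of the same size, represented here as 2-D lists; the function names and the toy "visible"/"infrared" data are illustrative only and do not come from the chapter.

```python
def fuse_weighted(a, b, alpha=0.5):
    """Pixel-wise weighted average of two registered grayscale images
    (equally sized 2-D lists); alpha weights the first sensor."""
    return [[alpha * pa + (1.0 - alpha) * pb
             for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

def fuse_max(a, b):
    """Choose-max rule: keep the stronger sensor response at each pixel."""
    return [[max(pa, pb) for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

# Two toy 2x2 "sensor" images of the same scene (hypothetical data)
visible  = [[100, 50], [0, 200]]
infrared = [[ 20, 90], [60, 40]]

print(fuse_weighted(visible, infrared))  # [[60.0, 70.0], [30.0, 120.0]]
print(fuse_max(visible, infrared))       # [[100, 90], [60, 200]]
```

The weighted average smooths the two measurements into a single compromise value, while the choose-max rule preserves the dominant response at each location; practical systems typically apply such rules after multi-resolution decomposition rather than directly on raw pixels.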
