Human perception of the outside world results from the interaction of the brain with many sensory organs. For example, the intelligent robots currently under investigation can carry many sensors covering the senses of vision, hearing, taste, smell, touch, pain, heat, force, slide, and proximity (Luo, 2002). All these sensors provide different profiles of the same scene in the same environment. To coordinate the various sensors and combine the information they obtain, suitable theories and methods of multi-sensor fusion are required.

Multi-sensor information fusion is a basic ability of human beings. A single sensor can only provide incomplete, inaccurate, vague, and uncertain information; sometimes, the information obtained by different sensors can even be contradictory. Human beings have the ability to combine the information obtained by different organs and then make estimations and decisions about the environment and events. Using a computer to perform multi-sensor information fusion can thus be considered a simulation of how the human brain treats complex problems.

Multi-sensor information fusion operates on data coming from various sensors to obtain more comprehensive, accurate, and robust results than those obtainable from a single sensor. Fusion can be defined as the process of jointly treating data acquired from multiple sensors, as well as sorting, optimizing, and conforming these data, to increase the ability to extract information and to improve decision capability. Fusion can extend the coverage of spatial and temporal information, reduce ambiguity, increase the reliability of decision making, and improve the robustness of systems. Image fusion is a particular type of multi-sensor fusion that takes images as its operating objects.
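The idea that fused data is more accurate than any single sensor's reading can be illustrated with a minimal sketch: inverse-variance weighting, a simple data-level fusion rule. The sensor values and variances below are illustrative assumptions, not taken from the chapter.

```python
# Fuse several noisy readings of the same quantity by inverse-variance
# weighting: more reliable sensors (smaller variance) get larger weight.
def fuse(readings):
    """readings: list of (value, variance) pairs from different sensors.
    Returns the fused value and its (reduced) variance."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    variance = 1.0 / total
    return value, variance

# Two hypothetical sensors measuring the same distance:
value, variance = fuse([(10.2, 0.5), (9.8, 0.25)])
```

Note that the fused variance, 1/(1/0.5 + 1/0.25) = 1/6, is smaller than that of either sensor alone, which is the sense in which fusion increases reliability.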
In a more general sense of image engineering (Zhang, 2006), the combination of multi-resolution images can also be counted as a fusion process. In this article, however, the emphasis is placed on the information fusion of multi-sensor images.
There are many modalities for capturing images and video, using various sensors and techniques (Brakenhoff, 1979; Committee, 1996; Bertero, 1998), such as visible light sensors (CCD, CMOS), infrared sensors, depth sensors, confocal scanning light microscopy (CSLM), a variety of computed tomography techniques (CT, ECT, SPECT), magnetic resonance imaging (MRI), synthetic aperture radar (SAR), millimeter-wave radar (MMWR), etc.
Key Terms in this Chapter
Evidence reasoning fusion: A method for fusing information from different sensors, also called Dempster-Shafer (D-S) theory. It can be used for feature-level fusion and decision-level fusion.
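The core of D-S evidence reasoning is Dempster's rule of combination, which merges two basic probability assignments and renormalizes away conflicting mass. A minimal sketch follows; the frame of discernment {A, B} and the mass values are illustrative assumptions.

```python
# Dempster's rule of combination. Focal elements are frozensets over
# the frame of discernment; masses are basic probability assignments.
def combine(m1, m2):
    """Combine two mass functions; conflicting mass is normalized out."""
    combined, conflict = {}, 0.0
    for x, mx in m1.items():
        for y, my in m2.items():
            inter = x & y
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mx * my
            else:
                conflict += mx * my      # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, A | B: 0.4}                # sensor 1: evidence for A
m2 = {B: 0.3, A | B: 0.7}                # sensor 2: weak evidence for B
fused = combine(m1, m2)
```

After combination, the fused masses sum to one and the evidence for A dominates, since sensor 2's support for B is weak and partly offset by conflict.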
Information Fusion: The combined processing of information originating from the same object or scene to obtain more complete, reliable, and accurate information.
Fusion based on rough set theory: A fusion method for the decision level. Instead of exact sets, it uses rough sets to manipulate sensor data. It can compress redundant information and thus avoid the combinatorial explosion problem during the fusion procedure.
Subjective evaluation of image fusion results: Judging the quality of image fusion by human observers' perception of the fusion results.
Objective evaluation of image fusion results: Judging the quality of image fusion by computable metrics applied to the fusion results.
Bayesian fusion: A probabilistic method for fusing information from different sensors. It can be used for feature-level fusion and decision-level fusion.
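A minimal sketch of Bayesian decision-level fusion: under the common assumption that sensor observations are conditionally independent given the class, the posterior is proportional to the prior times the product of the per-sensor likelihoods. The class names and likelihood values below are illustrative assumptions.

```python
# Bayesian fusion of class likelihoods from independent sensors.
def bayes_fuse(prior, likelihoods):
    """prior: {class: P(class)}; likelihoods: list of {class: P(obs|class)},
    one dict per sensor. Returns the normalized posterior."""
    post = dict(prior)
    for lk in likelihoods:
        post = {c: post[c] * lk[c] for c in post}
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

prior = {"vehicle": 0.5, "building": 0.5}
visible = {"vehicle": 0.8, "building": 0.4}   # visible-light sensor
infrared = {"vehicle": 0.7, "building": 0.2}  # infrared sensor
posterior = bayes_fuse(prior, [visible, infrared])
```

Each sensor alone only weakly favors "vehicle", but multiplying the two likelihoods yields a posterior of 0.28/0.32 = 0.875 for "vehicle", showing how fusion sharpens the decision.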
Sensor model: An abstract representation of a physical sensor and its information manipulation process.
Image Engineering: An integrated discipline/subject comprising the study of all the different branches of image and video techniques. It mainly consists of three levels: Image Processing, Image Analysis, and Image Understanding.