Improving Multimodality Image Fusion through Integrate AFL and Wavelet Transform

Girraj Prasad Rathor (AISECT University, India) and Sanjeev Kumar Gupta (AISECT University, India)
Copyright: © 2017 | Pages: 15
DOI: 10.4018/978-1-5225-0536-5.ch008


Image fusion based on the wavelet transform is the most commonly used fusion approach: the source images are combined in the wavelet domain according to a set of fusion rules. Because the contribution of each source image to the fused result is uncertain, designing a fusion rule that carries as much information as possible into the fused image becomes the central problem. Adaptive fuzzy logic is well suited to resolving such uncertainty, yet it has rarely been used in the design of fusion rules. This chapter introduces a new fusion technique based on the wavelet transform and adaptive fuzzy logic. After applying the wavelet transform to the source images, the method computes a weight for each source image's coefficients through adaptive fuzzy logic, then fuses the coefficients by weighted averaging with those weights to obtain the combined image. Mutual Information, Peak Signal to Noise Ratio, and Mean Square Error are used as evaluation criteria.
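The weighted-averaging rule described above can be sketched in Python with NumPy. This is a minimal illustration, not the authors' implementation: the single-level Haar transform, the activity-based weight (standing in for the fuzzy membership function, whose exact form is given in the chapter itself), and all function names are assumptions made for the example.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation subband
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuzzy_weight(cA, cB, eps=1e-12):
    """Activity-based weight in [0, 1]; a stand-in for the fuzzy membership."""
    aA, aB = np.abs(cA), np.abs(cB)
    return aA / (aA + aB + eps)

def fuse(imgA, imgB):
    """Fuse each subband by weighted averaging with per-coefficient weights."""
    fused = []
    for cA, cB in zip(haar_dwt2(imgA), haar_dwt2(imgB)):
        w = fuzzy_weight(cA, cB)
        fused.append(w * cA + (1.0 - w) * cB)
    return haar_idwt2(*fused)

def mse(x, y):
    """Mean Square Error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, peak=255.0):
    """Peak Signal to Noise Ratio in dB (infinite for identical images)."""
    e = mse(x, y)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

Weighting in the transform domain rather than the pixel domain lets strong detail coefficients from either source dominate locally, which is what the per-coefficient weight is meant to capture.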
Chapter Preview


Multimodality data fusion has become a discipline in which general, formal solutions are required for a growing number of applications. Several situations in image processing require both high spatial and high spectral information in a single composite image; this is important in medical diagnosis and remote sensing. However, individual modalities are not capable of providing such information, either by design or because of observational constraints. One possible solution is data fusion. Image fusion is the process of combining information from two or more multimodality images of a scene into a single composite image that is more informative and better suited to visual perception or computer processing. The aim of multimodality image fusion is to integrate complementary multi-sensor, multi-temporal, and/or multi-view information into one new image. The goal is to reduce uncertainty and minimize redundancy in the output while maximizing the information relevant to a specific application or task. An area-based maximum selection rule and consistency verification steps are used for feature selection. The algorithms are evaluated on multi-sensor as well as multi-focus image fusion.
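The area-based maximum selection rule with consistency verification mentioned above can be sketched as follows. This is a generic illustration of the technique, not the chapter's exact algorithm: the window radius, the energy measure, and the majority-vote flip are all assumptions for the example.

```python
import numpy as np

def local_energy(c, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) window (zero padding)."""
    p = np.pad(c, r)
    e = np.zeros(c.shape)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            e += p[dy:dy + c.shape[0], dx:dx + c.shape[1]] ** 2
    return e

def fuse_max_area(cA, cB, r=1):
    """Area-based maximum selection with majority-vote consistency verification."""
    # pick, per coefficient, the source with higher local activity
    decision = local_energy(cA, r) >= local_energy(cB, r)
    # consistency verification: a decision outvoted by its neighbours is flipped
    p = np.pad(decision.astype(float), r, mode="edge")
    votes = np.zeros(decision.shape)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            votes += p[dy:dy + decision.shape[0], dx:dx + decision.shape[1]]
    decision = votes > ((2 * r + 1) ** 2) / 2.0
    return np.where(decision, cA, cB)
```

Comparing window energies rather than single coefficients makes the selection robust to noise, and the consistency check removes isolated decisions that disagree with their neighbourhood.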
