Cloud-Based Image Fusion Using Guided Filtering

Piyush Kumar Shukla (UIT RGPV, India) and Madhuvan Dixit (Millennium Institute of Technology and Science, India)
DOI: 10.4018/978-1-4666-8654-0.ch008

Abstract

Current image coding schemes make it hard to exploit external images during transform coding, even when highly correlated images are available in the cloud. To solve this problem, we describe an approach to cloud-based image transform coding with an image fusion method that is distinct from existing image fusion methods. A fast and efficient image fusion technique is proposed for creating a high-quality fused image by merging multiple corresponding source images. The proposed technique is based on a two-scale decomposition of an image into a base layer containing large-scale variations and a detail layer capturing small-scale details. A novel guided-filtering-based weighted average method is proposed to make full use of spatial consistency when merging the base and detail layers. Experimental results show that the proposed technique achieves state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
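The pipeline described above can be sketched in a few steps: split each source image into base and detail layers, build saliency-based weight maps, refine those maps with a guided filter (large radius for the base layer, small radius for the detail layer), and recombine by weighted averaging. The following is a minimal illustrative sketch, not the chapter's exact algorithm; the radii, epsilon values, and saliency measure are assumptions chosen for clarity.

```python
# Illustrative two-scale guided-filter fusion sketch for two grayscale
# images in [0, 1]. Parameter values are assumptions, not the chapter's.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def guided_filter(guide, src, radius, eps):
    """Edge-preserving smoothing of `src`, steered by edges in `guide`."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gs / (var_g + eps)  # per-window linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse(img1, img2, base_size=31):
    """Fuse two source images by two-scale decomposition."""
    # 1. Two-scale decomposition: base (large-scale) + detail (small-scale).
    base1, base2 = uniform_filter(img1, base_size), uniform_filter(img2, base_size)
    det1, det2 = img1 - base1, img2 - base2
    # 2. Saliency: smoothed magnitude of the Laplacian response (assumed measure).
    s1 = gaussian_filter(np.abs(laplace(img1)), 5)
    s2 = gaussian_filter(np.abs(laplace(img2)), 5)
    w1 = (s1 >= s2).astype(float)  # binary weight map, refined below
    # 3. Guided filtering of the weight maps: large radius/eps for the base
    #    layer, small radius/eps for the detail layer (spatial consistency).
    wb1 = guided_filter(img1, w1, 15, 0.3)
    wb2 = guided_filter(img2, 1.0 - w1, 15, 0.3)
    wd1 = guided_filter(img1, w1, 3, 1e-6)
    wd2 = guided_filter(img2, 1.0 - w1, 3, 1e-6)
    # 4. Weighted average of each layer, then recombine base + detail.
    fused_base = (wb1 * base1 + wb2 * base2) / (wb1 + wb2 + 1e-12)
    fused_det = (wd1 * det1 + wd2 * det2) / (wd1 + wd2 + 1e-12)
    return fused_base + fused_det
```

The design choice to refine the weight maps with the source image as the guidance image is what aligns the fusion weights with image edges, avoiding the halo artifacts a plain smoothing of the binary maps would produce.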

Literature Review

A large number of image fusion methods (Wang, 2004; Mandic, 2009) have been proposed in the literature. Among these, multi-scale image fusion (Cruz, 2004) and data-driven image fusion (Mandic, 2009) have been particularly successful. They operate on different data representations, e.g., multi-scale coefficients (Crow, 1984; Rockinger, 1997) or data-driven decomposition coefficients (Mandic, 2009; Zeng, 2012), and apply different fusion rules to guide the combination of coefficients. The major advantage of these methods is that they preserve the details of the different source images well. However, they may produce brightness and color distortions, since spatial consistency is not well considered in the fusion process. To make full use of spatial context, optimization-based image fusion approaches have been proposed, e.g., methods based on generalized random walks (Shen, 2011) and Markov random fields (Rockinger, 1997). These methods estimate spatially smooth, edge-aligned weights by solving an energy function and then fuse the source images by a weighted average of pixel values. However, optimization-based methods share a common limitation, namely inefficiency, since they require multiple iterations to find the globally optimal solution. Moreover, global optimization may over-smooth the resulting weights, which degrades the fusion result (Varshney, 2011).

Key Terms in this Chapter

Filtering: A Confidence-Based Filtering method, named CBF, investigated for the cloud computing environment.

Cloud Resources: A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services).

Cloud Computing: A flourishing technology, owing to its scalability, flexibility, availability of resources, and other features.

Images: The concept image consists of all the cognitive structures in an individual's mind that are associated with a given concept.

Image Fusion: The process of merging multiple corresponding source images into a single fused image that preserves the important information of each.
