Blind Deconvolution of Sources in Fourier Space Based on Generalized Laplace Distribution


M. El-Sayed Waheed, Mohamed-H Mousa, Mohamed-K Hussein
Copyright: © 2013 | Pages: 11
DOI: 10.4018/ijsda.2013040104

Abstract

An approach to multi-channel blind deconvolution is developed that uses an adaptive filter to perform blind source separation in Fourier space. During the learning process, the approach maintains the same permutation and provides appropriate scaling of components across all frequency bins. Experiments indicate that the Generalized Laplace Distribution can be applied effectively to blind deconvolution of convolutive mixtures of sources in Fourier space, compared with the conventional Laplacian and Gaussian distributions.
Article Preview

2. The BSS/MBD Problems

The blind source separation task: assume there exist $n$ zero-mean source signals $s_1(t), \ldots, s_n(t)$ that are scalar valued and mutually (spatially) statistically independent (or as independent as possible) at each time instant or index value $t$, where $n$ is the number of sources (Amari, Douglas, Cichocki, & Yang, 1997; Cichocki & Amari, 2002). Denote by $x(t) = [x_1(t), \ldots, x_m(t)]^T$ the $m$-dimensional mixture data vector at discrete index value (time) $t$. The blind source separation (BSS) mixing model is:

$x(t) = A\,s(t) + N(t)$ (1)

where $A$ is the unknown $m \times n$ mixing matrix and $N(t)$ is the noise signal. A well-known iterative optimization method is the stochastic gradient (or gradient descent) search (Zeckhauser & Thompson, 1970). In this method the basic task is to define a criterion $J(W)$, which attains its minimum at some separating matrix $W$ if this $W$ is the expected optimum solution. Applying the natural gradient descent approach (Amari, Douglas, Cichocki, & Yang, 1997; Cichocki & Amari, 2002) with a cost function based on the Kullback-Leibler divergence, we may derive the learning rule for BSS:
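The article preview ends before the learning rule itself is displayed. For reference, the standard natural-gradient BSS update from the cited Amari et al. (1997) framework takes the form $W \leftarrow W + \eta\,(I - \varphi(y)\,y^T)\,W$ with $y = Wx$ and score function $\varphi$. A minimal NumPy sketch of this standard rule on a noiseless instance of model (1) follows; the matrix values, the tanh score choice, and all variable names are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two zero-mean, independent, super-Gaussian (Laplacian) sources s(t).
T = 5000
S = rng.laplace(size=(2, T))

# Noiseless instance of the mixing model (1): x(t) = A s(t).
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

# Standard natural-gradient BSS update (Amari et al., 1997):
#   W <- W + eta * (I - phi(y) y^T) W,  with y = W x.
# tanh is a common score-function choice for super-Gaussian sources.
W = np.eye(2)
eta = 0.05
for _ in range(500):
    Y = W @ X
    phi = np.tanh(Y)
    W += eta * (np.eye(2) - (phi @ Y.T) / T) @ W

# If separation succeeded, P = W A is close to a scaled permutation matrix,
# i.e. each recovered component matches one source up to sign and scale.
P = W @ A
```

Multiplying by $W$ on the right (rather than the raw gradient) is what makes this the natural gradient: the update becomes equivariant, so convergence speed does not depend on the conditioning of the unknown mixing matrix.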
