2. The BSS/MBD Problems
The blind source separation task. Assume that there exist $n$ zero-mean source signals $s_1(t), \dots, s_n(t)$ that are scalar valued and mutually (spatially) statistically independent (or as independent as possible) at each time instant or index value $t$, where $n$ is the number of sources (Amari, Douglas, Cichocki, & Yang, 1997; Cichocki & Amari, 2002). Denote by $\mathbf{x}(t) = [x_1(t), \dots, x_m(t)]^T$ the $m$-dimensional mixture data vector at discrete index value (time) $t$. The blind source separation (BSS) mixing model is

$$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{n}(t), \tag{1}$$

where $\mathbf{A}$ is the unknown $m \times n$ mixing matrix, $\mathbf{s}(t) = [s_1(t), \dots, s_n(t)]^T$ is the source vector, and $\mathbf{n}(t)$ is an additive noise vector. A well-known iterative optimization method is the stochastic gradient (or gradient descent) search (Zeckhauser & Thompson, 1970). In this method the basic task is to define a criterion
$J(\mathbf{W})$, which attains its minimum at some $\mathbf{W} = \mathbf{W}^*$ if this $\mathbf{W}^*$ is the expected optimum solution. Applying the natural gradient descent approach (Amari, Douglas, Cichocki, & Yang, 1997; Cichocki & Amari, 2002) with the cost function based on the Kullback-Leibler divergence, we may derive the learning rule for BSS:
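In its widely used form (e.g., Cichocki & Amari, 2002), the natural-gradient BSS update is $\mathbf{W} \leftarrow \mathbf{W} + \eta\,(\mathbf{I} - f(\mathbf{y})\mathbf{y}^T)\,\mathbf{W}$ with $\mathbf{y} = \mathbf{W}\mathbf{x}$, where $\mathbf{W}$ is the separating matrix. A minimal NumPy sketch of the mixing model (1) and this update follows; the two Laplacian (super-Gaussian) sources, the particular mixing matrix, and the nonlinearity $f(y) = \tanh(y)$ are illustrative assumptions of ours, not prescriptions from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixing model of Eq. (1): x(t) = A s(t) + n(t).
# Two zero-mean, independent, super-Gaussian (Laplacian) sources -- illustrative choices.
T = 20000
s = rng.laplace(0.0, 1.0, size=(2, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                       # "unknown" mixing matrix, picked for the demo
x = A @ s + 0.01 * rng.standard_normal((2, T))   # small additive noise n(t)

# Natural-gradient BSS update (batch form), assuming the standard rule
#   W <- W + eta * (I - E[f(y) y^T]) W,   y = W x,   f(y) = tanh(y).
W = np.eye(2)
eta = 0.05
for _ in range(1000):
    y = W @ x
    W += eta * (np.eye(2) - np.tanh(y) @ y.T / T) @ W

y = W @ x  # recovered sources, up to permutation and scaling
```

Because independence fixes the sources only up to permutation and scale, success is judged by each recovered component correlating strongly with exactly one true source, not by $\mathbf{W}\mathbf{A}$ equaling the identity.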