Introduction
In wireless communication systems, there is an ever-increasing need for high-speed data transmission over a variety of limited-bandwidth channels, which distort the digital signal and cause intersymbol interference (ISI) (Qureshi, 1985). Multipath propagation and the nonlinear behavior of amplifiers and converters introduce unavoidable nonlinear distortion; it is therefore important to restore the input to a channel by observing its output. A simple approach is to design a finite impulse response (FIR) filter that takes the observations of the channel and reconstructs its input; this approach is called equalization (Proakis, 2001; Qureshi, 1985; Mehmet et al., 2013).
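To make this idea concrete, the sketch below trains a linear FIR equalizer by least squares on a hypothetical dispersive channel. The channel taps, filter length, decision delay, and noise level are assumed values chosen only for illustration, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dispersive channel (taps are illustrative assumptions).
channel = np.array([0.5, 1.0, 0.3])

# BPSK training symbols and the distorted, noisy channel output.
symbols = rng.choice([-1.0, 1.0], size=500)
received = np.convolve(symbols, channel, mode="full")[: len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))

# Linear FIR equalizer: each output uses the last n_taps observations.
n_taps, delay = 7, 2  # equalizer length and decision delay (assumed values)
X = np.array([received[i - n_taps + 1 : i + 1][::-1]
              for i in range(n_taps - 1, len(received))])
d = symbols[n_taps - 1 - delay : len(received) - delay]  # delayed desired symbols

# Least-squares solution for the equalizer taps.
w, *_ = np.linalg.lstsq(X, d, rcond=None)

# Hard decisions on the equalized output.
decisions = np.sign(X @ w)
error_rate = np.mean(decisions != d)
```

At moderate noise levels the least-squares taps approximately invert the channel, and the symbol decisions match the delayed transmitted sequence.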
Owing to its effectiveness, equalization has attracted considerable attention (Baloch et al., 2012; Ibnkahla, 2000; Mehmet et al., 2013; Santamaria et al., 2002; Sunita et al., 2015; Zhao et al., 2011; Zerguine et al., 2001). Equalization techniques can be linear or nonlinear. Nonlinear structures are superior to linear ones, in particular on non-minimum-phase or nonlinear channels (Gibson et al., 1989; Zerguine et al., 2001; Sunita et al., 2015). More recently, artificial neural networks (ANNs) have attracted great attention, as they can perform complex mappings between input and output spaces and are capable of forming nonlinear decision boundaries (Baloch et al., 2012; Power et al., 2001; Sunita et al., 2015; Zerdoumi et al., 2015; Zerguine et al., 2001). Many studies have shown that ANN-based equalizers can provide better system performance than conventional equalizers (Amgothu & Kalaichelvi, 2015; Baloch et al., 2012; Corral et al., 2010; Lyu et al., 2015; Sunita et al., 2015; Zerdoumi et al., 2015).
Burse et al. (2010) presented a review of various neural-network-based equalizer architectures and discussed their learning methods. Among these architectures, the most widely used is the multilayer perceptron (MLP), owing to its stability, finite parameterization, and simple implementation (Baloch et al., 2012; Zerdoumi et al., 2015; Zerguine et al., 2001).
The back-propagation (BP) training algorithm is a supervised learning method for the MLP (Haykin, 1999). Despite the success of BP in many applications, its convergence rate is still too slow. Numerous modifications of the BP algorithm have been reported to improve its efficiency (Saduf, 2013; Wang et al., 2004).
An overview of learning strategies in ANNs, as well as numerous improvements to steepest descent via the BP algorithm, is provided in Schmidhuber (2015).
Different approaches have been taken to speed up BP. Besides adding a momentum term to the weight-update formulas (Gibson et al., 1989; Zerguine et al., 2001) and selecting a dynamic learning rate and momentum (Holger & Graeme, 1998; Norhamreeza et al., 2011; Thimm et al., 1996), tuning the slope of the activation function has also been proposed (Castro et al., 1999; Chandra & Singh, 2004; Daqi & Genxing, 2003; Thimm et al., 1996; Xu & Zhang, 2001; Yu et al., 2002).
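The momentum term mentioned above replaces the plain gradient step with an update of the form Δw(t) = μ Δw(t−1) − η ∇E, so that past gradients accumulate and damp oscillations. A minimal sketch on a toy quadratic loss (the loss, learning rate, and momentum coefficient are illustrative assumptions, not values from the cited works):

```python
import numpy as np

# Toy quadratic loss 0.5 * ||w - w_star||^2, enough to show the update rule.
w_star = np.array([2.0, -1.0])

w = np.zeros(2)
velocity = np.zeros(2)        # Δw(t-1), the previous weight change
eta, mu = 0.1, 0.9            # learning rate and momentum coefficient (assumed)

for _ in range(200):
    grad = w - w_star                      # gradient of the toy loss
    velocity = mu * velocity - eta * grad  # Δw(t) = μ Δw(t-1) - η ∇E
    w += velocity                          # apply the momentum-smoothed step
```

With μ = 0.9 the iterate spirals into the minimum far faster than plain gradient descent with the same η would on an ill-conditioned loss.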
Among the algorithms that speed up BP learning, those based on adapting the activation function play a decisive role (Daqi & Genxing, 2003; Chandra & Singh, 2004).
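As a minimal sketch of this idea (not any specific algorithm from the cited papers), the example below trains a single logistic neuron on a toy separable task while adapting the activation slope λ by gradient descent alongside the weights; the task, learning rate, and iteration count are all assumed for illustration.

```python
import numpy as np

def sigmoid(x, lam):
    """Logistic activation with adjustable slope (gain) lam."""
    return 1.0 / (1.0 + np.exp(-lam * x))

rng = np.random.default_rng(1)

# Toy linearly separable data (AND gate), used only for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])

w = rng.standard_normal(2) * 0.1
b = 0.0
lam = 1.0          # activation slope, trained alongside the weights
eta = 0.5          # learning rate (assumed value)

for _ in range(2000):
    a = X @ w + b                      # pre-activation
    y = sigmoid(a, lam)
    e = y - t                          # output error
    # For f(a) = 1/(1+exp(-lam*a)):  df/da = lam*y*(1-y),  df/dlam = a*y*(1-y).
    common = e * y * (1.0 - y)
    w -= eta * (X.T @ (common * lam)) / len(t)
    b -= eta * np.mean(common * lam)
    lam -= eta * np.mean(common * a)   # slope update: the gain-tuning step

preds = (sigmoid(X @ w + b, lam) > 0.5).astype(float)
```

As the decision boundary forms, the slope λ tends to grow, sharpening the sigmoid and accelerating the reduction of the residual error, which is the intuition behind slope-tuning schemes.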