Multilayer Perceptron Based Equalizer with an Improved Back Propagation Algorithm for Nonlinear Channels

Zohra Zerdoumi, Djamel Chikouche, Djamel Benatia
DOI: 10.4018/IJMCMC.2016070102

Abstract

Neural network based equalizers can readily compensate for channel impairments such as additive noise and inter-symbol interference (ISI). The authors present a new approach to improve the training efficiency of the multilayer perceptron (MLP) based equalizer. Their improvement consists in modifying the back propagation (BP) algorithm by adapting the activation function in addition to the other parameters of the MLP structure. The authors report experimental results evaluating the performance of the proposed approach, namely the back propagation with adaptive activation function (BPAAF), against the standard BP algorithm. To further establish its effectiveness, the proposed approach is also compared with a well-known nonlinear equalizer, the multilayer perceptron decision feedback equalizer (MLPDFE). The authors consider several performance measures on nonlinear channel equalization problems, specifically the quality of the restored signal, the steady-state mean square error (MSE) reached, and the bit error rate (BER) achieved.

Introduction

In wireless communication systems, there is an ever-increasing need for high-speed data transmission over a diversity of band-limited channels, which distort the digital signal and cause inter-symbol interference (ISI) (Qureshi, 1985). Multipath propagation and the nonlinear effects of amplifiers and converters may also introduce unavoidable nonlinear distortion, so it is important to restore the input of a channel by observing its output. A simple approach is to design a finite impulse response (FIR) filter that takes the observations of the channel and reconstructs its inputs; this approach is called equalization (Proakis, 2001; Qureshi, 1985; Mehmet et al., 2013).
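
To make this setting concrete, the following Python sketch simulates such a channel: a short FIR filter introduces ISI, a memoryless polynomial models the nonlinear distortion, and white Gaussian noise is added. The channel taps, polynomial coefficients, and SNR are illustrative values of the kind commonly used in equalization studies, not figures taken from this article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Binary symbols s[k] in {-1, +1}
    s = rng.choice([-1.0, 1.0], size=1000)

    # Linear ISI part: an illustrative three-tap FIR channel
    h = np.array([0.3482, 0.8704, 0.3482])
    x = np.convolve(s, h, mode="same")

    # Hypothetical memoryless nonlinearity (e.g., amplifier distortion)
    y = x + 0.2 * x**2 - 0.1 * x**3

    # Additive white Gaussian noise at an illustrative 15 dB SNR
    snr_db = 15.0
    noise_power = np.mean(y**2) / 10 ** (snr_db / 10)
    r = y + rng.normal(scale=np.sqrt(noise_power), size=y.shape)

    # Hard decisions on the raw received samples are unreliable because of
    # ISI and nonlinearity; an equalizer must reconstruct s[k] from r[k].
    print("BER without equalization:", np.mean(np.sign(r) != s))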

Due to its effectiveness, equalization has attracted considerable attention (Baloch et al., 2012; Ibnnkahla, 2000; Mehmet et al., 2013; Santamaria et al., 2002; Sunita et al., 2015; Zhao et al., 2011; Zerguine et al., 2001). Equalization techniques can be linear or nonlinear. Nonlinear structures are superior to linear ones, in particular on non-minimum phase or nonlinear channels (Gibson et al., 1989; Zerguine et al., 2001; Sunita et al., 2015). More recently, artificial neural networks (ANN) have attracted great attention, as they can perform complex mappings between input and output spaces and are capable of forming nonlinear decision boundaries (Baloch et al., 2012; Power et al., 2001; Sunita et al., 2015; Zerdoumi et al., 2015; Zerguine et al., 2001). Many studies have shown that ANN-based equalizers can provide better system performance than conventional equalizers (Amgothu & Kalaichelvi, 2015; Baloch et al., 2012; Corral et al., 2010; Lyu et al., 2015; Sunita et al., 2015; Zerdoumi et al., 2015).

Burse et al. (2010) presented a review of various neural network based equalizer architectures and discussed their learning methods. Among these architectures, the most widely used is the multilayer perceptron (MLP), due to its stability, finite parameterization, and simple implementation (Baloch et al., 2012; Zerdoumi et al., 2015; Zerguine et al., 2001).
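
For reference, a minimal MLP equalizer takes a tapped-delay-line window of received samples as input, passes it through a hidden layer of sigmoidal neurons, and outputs a soft symbol estimate. The layer sizes and activation choice below are illustrative, not those of the article.

    import numpy as np

    def mlp_equalizer_output(r_window, W1, b1, w2, b2):
        # r_window: m consecutive received samples (tap-delay input)
        # W1 (n_hidden x m), b1: hidden-layer weights and biases
        # w2 (n_hidden,), b2: output-layer weights and bias
        z = np.tanh(W1 @ r_window + b1)   # hidden-layer activations
        y = np.tanh(w2 @ z + b2)          # soft output in (-1, 1)
        return y                          # np.sign(y) gives the symbol decision

    # Illustrative dimensions: 5 input taps, 9 hidden neurons
    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(9, 5)), np.zeros(9)
    w2, b2 = rng.normal(size=9), 0.0
    print(mlp_equalizer_output(rng.normal(size=5), W1, b1, w2, b2))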

The back propagation (BP) training algorithm is a supervised learning method for the MLP (Haykin, 1999). Despite the success of BP in many applications, its convergence rate is still too slow. Many modifications to the BP algorithm have been reported in order to improve its efficiency (Saduf, 2013; Wang et al., 2004).

An overview of learning strategies in ANNs, as well as numerous improvements to steepest-descent training through the BP algorithm, was provided by Schmidhuber (2015).

Different approaches have been taken to speed up BP. Besides adding a momentum term to the weight-update formulas (Gibson et al., 1989; Zerguine et al., 2001) and selecting the learning rate and momentum dynamically (Holger & Graeme, 1998; Norhamreeza et al., 2011; Thimm et al., 1996), tuning the slope of the activation function has also been proposed (Castro et al., 1999; Chandra & Singh, 2004; Daqi & Genxing, 2003; Thimm et al., 1996; Xu & Zhang, 2001; Yu et al., 2002). A sketch of the momentum variant is given below.
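
The momentum variant augments the plain gradient step with a fraction of the previous weight change, which smooths the descent direction. A minimal sketch follows; the learning rate and momentum factor are illustrative constants, not values from the article.

    import numpy as np

    def bp_step_with_momentum(w, grad, delta_prev, eta=0.05, alpha=0.9):
        # w: current weights; grad: gradient of the MSE cost at w;
        # delta_prev: previous weight change; eta, alpha: illustrative constants
        delta = -eta * grad + alpha * delta_prev   # momentum reuses the past step
        return w + delta, delta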

Among the algorithms that speed up BP learning, those that adapt the activation function play a decisive role (Daqi & Genxing, 2003; Chandra & Singh, 2004).
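
In outline, such methods treat a parameter of each activation function, typically the slope of a sigmoid, as trainable and update it by gradient descent alongside the weights. The sketch below shows one generic slope-adaptation rule for a tanh activation; it illustrates the principle only and is not the exact BPAAF update proposed in this article.

    import numpy as np

    def adaptive_tanh(v, a):
        # Activation with trainable slope a: phi(v) = tanh(a * v)
        return np.tanh(a * v)

    def slope_update(a, v, delta, eta_a=0.01):
        # v: neuron net input; delta: back-propagated error at the output;
        # eta_a: hypothetical slope learning rate (not from the article).
        # Using d/da tanh(a * v) = v * (1 - tanh(a * v)**2):
        grad_a = delta * v * (1.0 - np.tanh(a * v) ** 2)
        return a - eta_a * grad_a  # gradient step on the slope, like a weight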
