Multilayer Perceptron New Method for Selecting the Architecture Based on the Choice of Different Activation Functions

Hassan Ramchoun, Mohammed Amine Janati Idrissi, Youssef Ghanou, Mohamed Ettaouil
Copyright © 2019 | Pages: 14
DOI: 10.4018/IJISSS.2019100102

Abstract

The multilayer perceptron has many classification and regression applications in fields such as pattern recognition and voice recognition. However, the choice of architecture, and in particular the type of activation function used for each neuron, has a great impact on convergence and performance. In this article, the authors introduce a new approach that optimizes the selection of the network architecture, the weights, and the activation functions. The resulting model is solved with a genetic algorithm, and the network is trained with the back-propagation method. Numerical results show the effectiveness of the proposed approach and the advantages of the new model over existing models in the literature.
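To make the approach described in the abstract concrete, the following is a minimal sketch of the idea, not the authors' exact model: a genetic algorithm searches over per-neuron activation-function assignments, and each candidate is scored by the error reached after a short back-propagation run. The toy XOR task, the fixed hidden-layer size, the candidate activations, and all GA settings are illustrative assumptions; the paper additionally optimizes the architecture and weights themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate activation functions and their derivatives w.r.t. the pre-activation.
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
ACTS = {
    "tanh":    (np.tanh, lambda z: 1.0 - np.tanh(z) ** 2),
    "sigmoid": (sigmoid, lambda z: sigmoid(z) * (1.0 - sigmoid(z))),
    "relu":    (lambda z: np.maximum(z, 0.0), lambda z: (z > 0).astype(float)),
}
NAMES = list(ACTS)
HIDDEN = 4  # fixed hidden-layer size (assumption)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy XOR task
y = np.array([[0], [1], [1], [0]], dtype=float)

def fitness(genes, epochs=300, lr=0.5):
    """Train by back-propagation with per-neuron activations given by `genes`;
    return the final mean squared error (lower is fitter)."""
    W1 = rng.normal(0, 0.5, (2, HIDDEN)); b1 = np.zeros(HIDDEN)
    W2 = rng.normal(0, 0.5, (HIDDEN, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        z1 = X @ W1 + b1
        h = np.column_stack([ACTS[NAMES[g]][0](z1[:, j]) for j, g in enumerate(genes)])
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) / len(X)          # gradient of cross-entropy + sigmoid
        d_h = d_out @ W2.T
        d_z1 = np.column_stack([ACTS[NAMES[g]][1](z1[:, j]) * d_h[:, j]
                                for j, g in enumerate(genes)])
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (X.T @ d_z1);  b1 -= lr * d_z1.sum(0)
    return float(np.mean((out - y) ** 2))

# Genetic algorithm: each chromosome assigns an activation to each hidden neuron.
pop = [rng.integers(len(NAMES), size=HIDDEN) for _ in range(10)]
for _ in range(15):
    pop.sort(key=fitness)                   # rank by (stochastic) fitness
    elite = pop[:4]                         # keep the best chromosomes
    children = []
    while len(elite) + len(children) < 10:
        a, b = elite[rng.integers(4)], elite[rng.integers(4)]
        cut = rng.integers(1, HIDDEN)       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        if rng.random() < 0.2:              # random mutation of one gene
            child[rng.integers(HIDDEN)] = rng.integers(len(NAMES))
        children.append(child)
    pop = elite + children

best = min(pop, key=fitness)
print("selected activations:", [NAMES[g] for g in best])
```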
Article Preview

A number of approaches in the literature address architecture optimization, while others consider the use of different activation functions within the same network, but only for a multilayer perceptron with one hidden layer (Ivanov & Gavrilaş, 2014). This section describes only those works that are most similar to ours.

Global search can prevent convergence to a non-optimal solution and determine the optimal number of hidden layers in an artificial neural network. Recently, some studies of the architecture optimization problem have been introduced to determine neural network parameters, but not optimally (Lins & Ludermir, 2005).

Traditional algorithms fix the neural network architecture before learning (Joseph, 2008). Other studies propose constructive learning (Wang, Chaudhari, & Patra, 2004; Wang, 2008), which begins with a minimal structure: the hidden layers are initialized with a minimal number of neurons, and neurons are added as training proceeds, as sketched below. Most researchers treat the construction of the neural architecture without finding the optimal one.
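As a rough illustration of the constructive idea (not the cited authors' algorithms), the following sketch grows the hidden layer one neuron at a time and stops once the network reaches a target training accuracy. The dataset, the accuracy threshold, the size cap, and the solver settings are all assumptions.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)  # assumed dataset

size = 1                                    # start with a minimal hidden layer
while True:
    net = MLPClassifier(hidden_layer_sizes=(size,), max_iter=2000, random_state=0)
    net.fit(X, y)
    if net.score(X, y) >= 0.95 or size >= 32:   # assumed stopping criteria
        break
    size += 1                               # constructive step: add one neuron

print(f"selected hidden size: {size}, training accuracy: {net.score(X, y):.3f}")
```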

The authors of (Ludermir, Yamazaki, & Zanchettin, 2006) propose an approach that works with few connections in one hidden layer and trains with different hybrid optimization algorithms; in later work, they use different activation functions within a group of networks (Gomes, Ludermir, & Lima, 2011).

In our previous work, we addressed the optimization of hidden layers and connections by introducing one binary decision variable for layers and another for connections (Ramchoun, Idrissi, Ghanou, & Ettaouil, 2017); in another work, we addressed the optimization of the number of hidden nodes per layer (Ettaouil, Lazaar, & Ghanou, 2013). Both models were trained with a back-propagation algorithm. The sketch below illustrates the decision-variable idea.
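This is a minimal sketch of how such binary decision variables could act, assuming they behave as masks on the connections; the exact formulation in the cited papers may differ. The network sizes and the random masks are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights of one hidden layer in a 3-5-2 network (sizes are illustrative).
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 2))
x = rng.normal(size=3)

# Hypothetical binary connection decision variables: a zero in C1 removes the
# corresponding connection, so optimizing C1 jointly with the weights prunes
# the architecture. A similar binary variable per layer would switch an
# entire hidden layer on or off.
C1 = rng.integers(0, 2, size=W1.shape)

h = np.tanh(x @ (W1 * C1))   # masked forward pass through the hidden layer
out = h @ W2
print(out)
```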
