Broad Autoencoder Features Learning for Classification Problem

Ting Wang, Wing W. Y. Ng, Wendi Li, Sam Kwong
DOI: 10.4018/IJCINI.20211001.oa23

Abstract

Activation functions such as Tanh and Sigmoid are widely used in Deep Neural Networks (DNNs) for pattern classification problems. To take advantage of different activation functions, the Broad Autoencoder Features (BAF) method is proposed in this work. The BAF consists of four parallel-connected Stacked Autoencoders (SAEs), each using a different activation function: Sigmoid, Tanh, ReLU, or Softplus. With this broad setting, the final learned features merge features obtained through different nonlinear mappings of the original input features, which helps to extract more information from the original inputs. Experimental results show that the BAF yields better-learned features and better classification performance.
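The following is a minimal sketch of the parallel structure described above, written in PyTorch purely for illustration. The layer sizes, the single-hidden-layer branches, and the simple concatenation of hidden codes are assumptions for brevity, not the authors' exact configuration (in the paper each branch is a stacked autoencoder).

```python
import torch
import torch.nn as nn

class BAFSketch(nn.Module):
    """Illustrative sketch: four parallel autoencoder branches, each with a
    different activation, whose hidden codes are concatenated as the learned
    feature vector (hypothetical layer sizes)."""

    def __init__(self, in_dim=64, hid_dim=128):
        super().__init__()
        acts = [nn.Sigmoid(), nn.Tanh(), nn.ReLU(), nn.Softplus()]
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hid_dim), act) for act in acts]
        )
        self.decoders = nn.ModuleList(
            [nn.Linear(hid_dim, in_dim) for _ in acts]
        )

    def forward(self, x):
        codes = [enc(x) for enc in self.encoders]            # four nonlinear mappings
        recons = [dec(c) for dec, c in zip(self.decoders, codes)]
        features = torch.cat(codes, dim=1)                   # merged broad features
        return features, recons

model = BAFSketch()
x = torch.randn(8, 64)
features, recons = model(x)
# Reconstruction loss summed over branches (training loop omitted)
loss = sum(nn.functional.mse_loss(r, x) for r in recons)
```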

1. Introduction

With the rapid advancement and deployment of information technologies, enormous volumes of data, for example video, image, and medical data, have become accessible on the Internet. The need to mine useful information from such massive data poses a great challenge to the machine learning community. Traditional machine learning techniques that rely on hand-crafted features cannot discover hidden information in the data and may suffer from either information loss or overfitting. In contrast, DNNs have been successfully applied to a wide range of applications and have delivered impressive results in computational intelligence, e.g., video processing, image classification, speech recognition, and computer vision.

In general, machine learning techniques can be partitioned into generative and discriminative strategies. Currently, the most commonly used DNNs are generative models, e.g., Deep Belief Networks (Hinton et al., 2006), Restricted Boltzmann Machines (Salakhutdinov et al., 2007), and Deep Boltzmann Machines (Salakhutdinov et al., 2009). These models are trained with MCMC-based approximations of the log-likelihood gradient, which become increasingly inaccurate as training progresses because samples from the Markov chains cannot mix between modes quickly enough. In contrast, models such as the Autoencoder (AE) (Bengio et al., 2009), the Variational Autoencoder (Kingma and Welling, 2014; Mescheder et al., 2017; Tan et al., 2018), and the Importance Weighted Autoencoder (Burda et al., 2016) are trained with direct back-propagation and thus avoid the difficulties introduced by MCMC training. Each of these methods can be regarded as a projection that improves classification by mapping samples from the original feature space into a projected space with better class separability (Wasikowski et al., 2010). Among them, the AE (Bengio et al., 2009) is an unsupervised feature learning technique that aims to make the reconstructed output approximately equal to the original input. For feature representation learning, the number of hidden units is usually larger than the number of input feature dimensions, and the projection at the hidden layer of the AE yields a useful representation of the original inputs (Bengio et al., 2009).
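As a concrete reference for the autoencoder described above, here is a minimal sketch in PyTorch, with assumed layer sizes and toy data: the encoder maps the input into a wider hidden representation, the decoder tries to reconstruct the original input via back-propagation, and after training the hidden activation serves as the learned feature.

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch: hidden layer wider than the input,
# trained to reconstruct its own input (all dimensions are assumptions).
in_dim, hid_dim = 32, 64
encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

x = torch.rand(256, in_dim)          # toy unlabeled data
for _ in range(100):                 # unsupervised reconstruction training
    code = encoder(x)
    x_hat = decoder(code)
    loss = nn.functional.mse_loss(x_hat, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

features = encoder(x).detach()       # hidden projection used as learned features
```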
