Maxout Networks for Visual Recognition

Gabriel Castaneda, Paul Morris, Taghi M. Khoshgoftaar
DOI: 10.4018/IJMDEM.2019100101

Abstract

This study investigates the effectiveness of multiple maxout activation variants on image classification, facial identification, and facial verification tasks using convolutional neural networks. A network with maxout activation has a higher number of trainable parameters than a network with a traditional activation function. However, it is not clear whether the activation function itself or the increase in the number of trainable parameters is responsible for yielding the best performance on different entity recognition tasks. This article investigates whether rectified linear unit networks with an increased number of convolutional filters perform equal to or better than maxout networks. Our experiments compare the rectified linear unit, leaky rectified linear unit, scaled exponential linear unit, and hyperbolic tangent to four maxout variants. Throughout the experiments, we found that, on average across all datasets, rectified linear unit networks perform better than any maxout activation when the number of convolutional filters is increased sixfold.
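For context, a maxout unit outputs the elementwise maximum over k affine feature maps, which is why a maxout layer carries roughly k times as many trainable parameters as a comparable ReLU layer. The NumPy sketch below is illustrative only; the shapes, names, and choice of k are assumptions and are not taken from the article.

import numpy as np

def maxout(x, W, b):
    # x: (batch, d_in); W: (d_in, d_out, k); b: (d_out, k)
    # Each output unit is the max over its k linear pieces.
    z = np.einsum('bi,iok->bok', x, W) + b   # (batch, d_out, k)
    return z.max(axis=-1)                    # (batch, d_out)

# Example with k = 2 linear pieces per unit (hypothetical sizes).
x = np.random.randn(4, 8)
W = np.random.randn(8, 16, 2)
b = np.zeros((16, 2))
y = maxout(x, W, b)   # shape (4, 16)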

Introduction

Visual recognition methods have a wide range of applications, including object tracking (Doulamis & Voulodimos, 2016), activity recognition (Lin et al., 2016), pattern recognition tasks (Tygert et al., 2016), semantic segmentation (Long, Shelhamer, & Darrell, 2015), medical imaging (Suzuki, 2017), human pose estimation (Toshev & Szegedy, 2014), automatic image annotation (Murthy, Maji, & Manmatha, 2015), human-computer interaction (Nishikawa & Bae, 2018), and object counting (Heinrich, Roth, & Zschech, 2019). Current methods work reasonably well in constrained domains but are quite sensitive to clutter and occlusion. Object recognition research has made notable strides since the advent of Convolutional Neural Networks (CNNs).

An activation function in a neural network is a transfer function that transforms the weighted sum of a neuron into an output signal. The output signal is then used as an input to the next layer in the stack. The activation function introduces nonlinearities to CNNs (LeCun & Bengio, 1995), which are required for multi-layer networks to detect nonlinear features. Commonly used activation functions include sigmoid, hyperbolic tangent (tanh), and Rectified Linear Unit (ReLU) (Nair & Hinton, 2010).
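As an illustration of the functions named above, the following minimal NumPy sketch defines sigmoid, tanh, and ReLU; the code is illustrative and not part of the article.

import numpy as np

def sigmoid(x):
    # Logistic sigmoid: squashes inputs into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: anti-symmetric squashing into (-1, 1).
    return np.tanh(x)

def relu(x):
    # Rectified linear unit: passes positive inputs, zeroes out negatives.
    return np.maximum(0.0, x)

# Example: apply each activation to the same pre-activation vector.
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(z), tanh(z), relu(z))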

Deep Neural Networks (DNNs) are models (networks) composed of many layers that transform input data to outputs while learning increasingly higher-level features. Deep learning allows multiple processing layers to learn and represent data at multiple levels of abstraction, imitating how the brain perceives and understands multimodal information, and thus implicitly capturing intricate structures of large-scale data (Voulodimos, Doulamis, Doulamis, & Protopapadakis, 2018). DNNs are the best-performing models on computer vision object recognition benchmarks and achieve human-level performance on object categorization (Russakovsky et al., 2015; He, Zhang, Ren, & Sun, 2015). Deep CNNs have the unique capability of feature learning, that is, of automatically learning features from the given dataset. The activation function plays a major role in the success of training DNNs, but there is a lack of consensus on how to select a good activation function for deep learning, and a specific function may not be suitable for all applications (Castaneda, Morris, & Khoshgoftaar, 2019).

DNNs have successfully used sigmoidal units, but sigmoidal activation functions suffer from gradient saturation. The major drawback of the sigmoid and tanh functions is the very small gradients produced in the saturation regions at both ends, and as the slope parameter of these functions increases, the saturation regions grow larger. ReLU, in turn, saturates when inputs are negative. These saturation regions cause gradient diffusion and block gradients from propagating to deeper layers (Li, Ng, Yeung, & Chan, 2014). For this reason, alternative activation functions have been proposed for neural network training. Compared with traditional saturating activation functions such as the logistic sigmoid and tanh units, ReLU is one-sided. This property encourages sparse activations in the network, and thus greater biological plausibility (Li, Ding, & Li, 2018). The use of ReLU was a breakthrough that enabled the fully supervised training of state-of-the-art DNNs (Krizhevsky, Sutskever, & Hinton, 2012). Because of its simplicity and effectiveness, ReLU has become the default activation function across the deep learning community.
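The saturation behavior described above can be seen directly in the gradients. The short NumPy sketch below evaluates the hand-written derivatives of sigmoid, tanh, and ReLU at a few sample points; the values and code are illustrative, not from the article.

import numpy as np

x = np.array([-10.0, 0.0, 10.0])

# Sigmoid gradient s(x) * (1 - s(x)) vanishes at both tails.
s = 1.0 / (1.0 + np.exp(-x))
print(s * (1 - s))            # ~[4.5e-05, 0.25, 4.5e-05]

# tanh gradient 1 - tanh(x)^2 also vanishes at both tails.
print(1 - np.tanh(x)**2)      # ~[8.2e-09, 1.0, 8.2e-09]

# ReLU gradient is 0 for negative inputs and 1 for positive inputs.
print((x > 0).astype(float))  # [0.0, 0.0, 1.0]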
