Model Optimisation Techniques for Convolutional Neural Networks

Sajid Nazir, Shushma Patel, Dilip Patel
DOI: 10.4018/978-1-7998-8686-0.ch011

Abstract

Deep neural networks provide good results for computer vision tasks. This has been possible due to a renewed interest in neural networks, the availability of large-scale labelled training data, and virtually unlimited processing and storage on cloud platforms and high-performance clusters. A convolutional neural network (CNN) is one such architecture, well suited to image classification. An important factor in CNN performance, besides data quality, is the choice of hyperparameters, which define the model itself. Model or hyperparameter optimisation involves selecting the best configuration of hyperparameters, but is challenging because the set of hyperparameters differs for each type of machine learning algorithm. Determining a better-performing machine learning model therefore requires considerable computational time and resources. Consequently, the process attracts much research interest, and a transition to a fully automated process is currently underway. This chapter provides a survey of the CNN model optimisation techniques proposed in the literature.

Introduction

Artificial Intelligence (AI) is transforming the healthcare, financial, academic, entertainment and industrial domains and is the driving engine for the applications we use every day. The increased processing power of Graphical Processing Units (GPUs) and the availability of large image datasets have fostered a renewed interest in extracting semantic information from images. This is in part due to the large amounts of visual data available with the rise of big data and social media networks. Coupled with advances in storage and processing technologies, this has made it possible to progress from image processing to interpreting images for extracting contextual information.

Machine learning is a sub-field of AI that makes it possible for models to make predictions without being explicitly programmed (Neetesh, 2017). Machine learning for vision problems comprises techniques that can provide intelligent solutions to complex problems of interpreting and describing a scene, given sufficient data. Much progress has been made in this area, although improvements are still needed. In a machine learning model, there are two types of parameters. Model parameters are initialised and then updated through the learning process, such as neuron weights in neural networks. The other type is hyperparameters, which have to be set before training a model, as these define the model architecture (Yang, 2020). A hyperparameter may be continuous, categorical or integer-valued (Victoria & Maragatham, 2021).
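The distinction above can be illustrated with a minimal sketch: fitting y = w·x by gradient descent, where the learning rate and epoch count are hyperparameters fixed before training, and the weight w is a model parameter learned from data. The dataset and values are illustrative only.

```python
# Hyperparameters: set before training; they define the training procedure.
learning_rate = 0.1   # continuous hyperparameter
epochs = 100          # integer hyperparameter

# Toy data following y = 2x, so the learned parameter should approach 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Model parameter: initialised, then updated by the learning process.
w = 0.0
for _ in range(epochs):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(w)  # converges towards 2.0
```

Changing the hyperparameters (e.g. a much larger learning rate) changes how, or whether, the parameter w converges, which is exactly why their configuration matters.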

A general sequence of steps to be followed for a machine learning application is shown in Figure 1. The quality and quantity of data, together with an optimum set of hyperparameters, govern the performance of any machine learning model. There are many traditional machine learning approaches, such as Random Forest, but the recent trend for image classification applications is the use of deep learning.

Figure 1.

General sequence of steps for a machine learning application.

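A pipeline such as the one in Figure 1 can be sketched end to end in a few lines. This is a hypothetical illustration assuming the usual stages (collect data, preprocess, train, evaluate); the toy threshold classifier stands in for a real model.

```python
def collect_data():
    # Toy dataset: a point is labelled 1 when x > 0.5, else 0.
    return [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

def preprocess(samples):
    # Min-max scale the feature to the range [0, 1].
    xs = [x for x, _ in samples]
    lo, hi = min(xs), max(xs)
    return [((x - lo) / (hi - lo), y) for x, y in samples]

def train(samples):
    # "Learn" a decision threshold: the midpoint between the class means.
    xs0 = [x for x, y in samples if y == 0]
    xs1 = [x for x, y in samples if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def evaluate(threshold, samples):
    # Fraction of samples the threshold classifies correctly.
    correct = sum((x > threshold) == bool(y) for x, y in samples)
    return correct / len(samples)

data = preprocess(collect_data())
model = train(data)
accuracy = evaluate(model, data)
```

In a real CNN workflow each stage is far heavier (augmentation, epochs of backpropagation, validation splits), but the overall control flow follows the same sequence.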

Deep learning is a branch of machine learning (Bhandare & Kaur, 2018) that derives its name from neural networks that comprise many layers. Multiple layers are used to model high-level features from complex data, with each successive layer using the outputs of the preceding layer as its input (Benuwa, 2016). The increased research interest in neural networks is due to the promising results obtained in the ImageNet competitions (Krizhevsky et al., 2012). A review of recent advances in deep learning is provided in Minar (2018), along with a taxonomy of deep learning techniques and applications. A review of deep supervised learning, unsupervised learning and reinforcement learning is provided in Schmidhuber (2015), covering developments since 1940. Benuwa (2016) reviewed deep learning techniques along with algorithm principles and architectures for deep learning.

Key Terms in this Chapter

Hyperparameter: These are the parameters that define the model itself and have to be set before a model can be trained. These are different from the parameters that the model learns during training such as node weights.

Deep Learning: A recent branch of machine learning based on neural network architectures that are modelled after the human brain. Deep learning provides excellent results for computer vision, natural language processing, etc.

Ensemble: A technique that combines the outputs of two or more models to form a better prediction result than that provided by any of the individual models.
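One common way to ensemble classifiers is to average their per-class probability outputs. The sketch below is a minimal pure-Python illustration; the function name, class labels and probability values are hypothetical.

```python
def ensemble_average(predictions):
    """Average per-class probabilities across several models' outputs."""
    n_models = len(predictions)
    n_classes = len(predictions[0])
    return [sum(p[c] for p in predictions) / n_models
            for c in range(n_classes)]

# Softmax outputs of three models for one image (classes: cat, dog, bird).
model_outputs = [
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.4, 0.5, 0.1],
]
combined = ensemble_average(model_outputs)
```

Here the third model alone would pick "dog", but the averaged ensemble still favours "cat"; combining models in this way tends to smooth out individual models' errors.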

Automated Machine Learning: Automated machine learning automates the complex and time-consuming process of model development, and is thus very useful for machine learning specialists. It also makes it possible for non-experts in the machine learning domain to create machine learning models that provide optimum results.

Explainable Artificial Intelligence (AI): A form of AI in which the process and the model help a human, usually a non-expert, understand why a particular model outcome was generated. This increases the degree of trust in the outcomes, and the accompanying confidence to act on the outcomes of an AI-based system.

Convolutional Neural Networks: The most common deep learning architecture for image data, based on the convolution operation. These models are mainly applied to computer vision applications.

Hyperparameter Optimisation: A process through which a configuration of hyperparameters is selected from the available range of hyperparameter values so as to achieve optimum model performance. Model performance is, to a large extent, governed by the choice of hyperparameters.
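The simplest hyperparameter optimisation strategy is an exhaustive grid search: train and score every configuration in the search space, and keep the best. The sketch below is a hypothetical illustration; the search space and the stand-in scoring function (which replaces actual CNN training so the example runs anywhere) are made up for demonstration.

```python
from itertools import product

# Search space mixing the hyperparameter types mentioned in the chapter:
# continuous (discretised here), integer, and categorical.
search_space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
    "activation": ["relu", "tanh"],
}

def train_and_score(config):
    # Stand-in for training a CNN and returning validation accuracy.
    # A real implementation would fit the model with this configuration.
    score = 0.5
    score += 0.2 if config["learning_rate"] == 0.01 else 0.0
    score += 0.1 if config["batch_size"] == 32 else 0.0
    score += 0.1 if config["activation"] == "relu" else 0.0
    return score

best_config, best_score = None, -1.0
keys = list(search_space)
for values in product(*(search_space[k] for k in keys)):
    config = dict(zip(keys, values))
    score = train_and_score(config)
    if score > best_score:
        best_config, best_score = config, score
```

Grid search evaluates every combination (here 3 × 3 × 2 = 18 configurations), which is why the cost grows quickly with the number of hyperparameters and motivates the smarter search strategies surveyed in this chapter.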
