Deep Learning on Edge: Challenges and Trends

Mário P. Véstias
Copyright © 2020 | Pages: 20
DOI: 10.4018/978-1-7998-2112-0.ch002

Abstract

Deep learning on edge has been attracting the attention of researchers and companies looking to provide solutions for the deployment of machine learning computing at the edge. A clear understanding of the design challenges and of the application requirements is fundamental to define the next generation of edge devices for machine learning inference. This chapter reviews several aspects of deep learning: applications, deep learning models, and computing platforms. It describes how deep learning is being applied to edge devices, gives a perspective on the models and computing devices used for deep learning on edge, and examines the challenges hardware designers face in meeting the tight constraints of edge computing platforms, such as performance, power consumption, and flexibility. Finally, it discusses trends in deep learning models and architectures.

Background

Machine learning is a subfield of artificial intelligence whose objective is to give systems the capacity to learn and improve on their own without being explicitly programmed to do so. Machine learning algorithms extract features from data and build models from them so that new decisions and outcomes are produced without the models and rules being programmed a priori.

There are many types of machine learning algorithms with different approaches and application targets: Bayesian (Barber, 2012), clustering (Bouveyron et al., 2019), instance-based (Keogh, 2011), ensemble (Zhang, 2012), artificial neural network (Haykin, 2008), deep learning network (Patterson & Gibson, 2017), decision tree (Quinlan, 1992), association rule learning (Zhang & Zhang, 2002), regularization (Goodfellow et al., 2016), regression (Matloff, 2017), support-vector machine (Christmann & Steinwart, 2008) and others.

Key Terms in this Chapter

Deep Neural Network (DNN): An artificial neural network with multiple hidden layers.

Convolutional Neural Network (CNN): A class of deep neural networks applied to image processing where some of the layers apply convolutions to input data.

Pooling Layer: A network layer that computes the average (average pooling) or maximum (max pooling) of a window of neurons. The pooling layer subsamples the input feature maps to achieve translation invariance and reduce overfitting.
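As a minimal sketch in Python (not the chapter's code; the 2x2 window and stride of 2 are assumptions for illustration), max pooling over a single feature map can be written as:

import numpy as np

def max_pool_2x2(fmap):
    # 2x2 max pooling with stride 2 over one 2D feature map.
    h, w = fmap.shape
    fmap = fmap[: h // 2 * 2, : w // 2 * 2]  # crop to even dimensions
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)
print(max_pool_2x2(fmap))  # [[ 5.  7.] [13. 15.]]

Average pooling is identical except that the window maximum is replaced by the window mean.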

Unsupervised Training: A training process of neural networks in which the training set does not include the associated outputs (labels).

Deep Learning (DL): A class of machine learning algorithms, based on neural networks with many layers, for the automation of predictive analytics.

Convolutional Layer: A network layer that applies a series of convolutions to a block of input feature maps.
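To make the operation concrete, the following Python sketch (an illustration, not the chapter's implementation) applies a valid 2D convolution of one kernel over one input feature map, implemented as cross-correlation as in most deep learning frameworks; a real convolutional layer sums such results over the whole block of input feature maps and adds a bias:

import numpy as np

def conv2d_single(fmap, kernel):
    # Slide the kernel over the feature map and take dot products.
    kh, kw = kernel.shape
    oh, ow = fmap.shape[0] - kh + 1, fmap.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(fmap[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # simple vertical-edge detector
print(conv2d_single(np.ones((6, 6)), kernel).shape)  # (4, 4)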

Semi-Supervised: A training process of neural networks that mixes supervised and unsupervised training.

Hopfield Network: A fully connected recurrent neural network in which every neuron connects to every other neuron.

Feature Map: A 2D matrix of neurons. A convolutional layer receives a block of input feature maps and generates a block of output feature maps.

Deep Belief Network: A probabilistic generative model with multiple layers of so-called latent variables that keep the state of the network.

Machine Learning: A subfield of artificial intelligence whose objective is to give systems the ability to learn and improve on their own without being explicitly programmed to do so.

Autoencoder: An unsupervised learning network used to encode an input into a representation with fewer dimensions.
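A minimal Python sketch of the idea (the layer sizes and the tanh activation are assumptions; training code is omitted):

import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(3, 8))  # encoder: 8-dim input -> 3-dim code
W_dec = rng.normal(size=(8, 3))  # decoder: reconstructs the input from the code

def encode(x):
    return np.tanh(W_enc @ x)

def decode(z):
    return W_dec @ z

x = rng.normal(size=8)
x_hat = decode(encode(x))
# Training would adjust W_enc and W_dec to minimize the reconstruction error:
print(float(np.mean((x - x_hat) ** 2)))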

Long Short-Term Memory (LSTM) Network: A variation of recurrent neural networks designed to reduce the vanishing gradient problem.

Perceptron: The basic unit of a neural network; it combines the inputs from neurons of the previous layer using a vector of weights (parameters) associated with the connections between perceptrons.
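In Python, one perceptron reduces to a weighted sum plus a bias followed by an activation function (a ReLU here; the specific activation is an assumption of this sketch):

import numpy as np

def perceptron(inputs, weights, bias):
    # Weighted sum of the previous layer's outputs, then ReLU activation.
    return max(0.0, float(np.dot(weights, inputs) + bias))

print(perceptron(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.3]), 0.05))  # approximately 0.3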

Artificial Neural Network (ANN): A computing model based on the structure of the human brain, with many interconnected processing nodes that model input-output relationships. The model is organized in layers of nodes that interconnect with each other.

Recurrent Neural Network (RNN): A class of deep neural networks consisting of dense networks with state, which makes them suited to processing sequential data.
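A single recurrent step can be sketched in Python as follows (the dimensions and the tanh activation are assumptions for illustration); the new state depends on both the current input and the previous state, which is what gives the network memory:

import numpy as np

rng = np.random.default_rng(1)
W_x = rng.normal(size=(5, 4))  # input-to-state weights (4-dim input, 5-dim state)
W_h = rng.normal(size=(5, 5))  # state-to-state (recurrent) weights

def rnn_step(x, h):
    return np.tanh(W_x @ x + W_h @ h)

h = np.zeros(5)
for x in rng.normal(size=(3, 4)):  # a sequence of three inputs
    h = rnn_step(x, h)
print(h.shape)  # (5,)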

Fully Connected Layer: A network layer where all neurons of the layer are connected to all neurons of the previous layer.
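In matrix form, a whole fully connected layer is one matrix-vector product plus a bias, as in this hedged Python sketch (the ReLU activation is an assumption):

import numpy as np

def fully_connected(x, W, b):
    # Each row of W holds the weights of one output neuron,
    # so every output neuron is connected to every input neuron.
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(2)
y = fully_connected(rng.normal(size=16), rng.normal(size=(10, 16)), np.zeros(10))
print(y.shape)  # (10,): 10 neurons, each connected to all 16 inputs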

Boltzmann Machine: An unsupervised network that maximizes the product of probabilities assigned to the elements of the training set.

Edge Device: Any hardware device that serves as an entry point of data and may store, process, and/or send the data to a central server.

Softmax Function: A function that takes as input a vector of k values and normalizes it into a probability distribution of k probabilities.
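A direct Python sketch of the definition (subtracting the maximum is a standard numerical-stability trick, not part of the definition itself):

import numpy as np

def softmax(v):
    # Exponentiate and normalize so the k outputs sum to 1.
    e = np.exp(v - np.max(v))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())  # probabilities summing to 1.0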
