Deep Learning in the Healthcare Industry: Theory and Applications

Zahra A. Shirazi (Department of Statistical and Actuarial Sciences, The University of Western Ontario, Canada), Camila P. E. de Souza (Department of Statistical and Actuarial Sciences, The University of Western Ontario, Canada), Rasha Kashef (Electrical, Computer and Biomedical Engineering Department, Ryerson University, Canada) and Felipe F. Rodrigues (School of Management, Economics and Mathematics, King's University College at Western University, Canada)
DOI: 10.4018/978-1-7998-2581-4.ch010


Artificial Neural Networks (ANNs) are composed of nodes joined to one another through weighted connections. Deep learning, as an extension of ANNs, refers to neural network models built from several categories of layers: an input layer, hidden layers, and an output layer. Input data is fed into the first (input) layer, but the main processing of a neural network model takes place within the hidden layers, which may range from a single layer to many. The structure of the hidden layers differs according to the type of model, and the choice of model depends on the type of input data. For example, convolutional neural networks are the most appropriate for image data, whereas recurrent neural networks and long short-term memory models are better choices for text, sequential, and time-series data. This chapter summarizes the state-of-the-art deep learning methods applied to the healthcare industry.
Chapter Preview

Deep Learning Models

In this section, five of the leading deep learning analytical models are discussed. For each model, we define the basic learning fundamentals and usage, and summarize its architecture. Each model aims to predict, classify, or cluster subjects based on a training set of features and, where applicable, responses/labels.

In general, Artificial Neural Network (ANN) models consist of three primary layers, namely input, hidden, and output, with the differences in each layer giving rise to each specific model. To pave the way for the discussions in the next sections, the abbreviations used in this chapter are summarized in Table 1.

Table 1.
List of abbreviations and symbols
ANN: Artificial Neural Network
AUC: Area Under the ROC Curve
BMU: Best Matching Unit
BPA: Back-Propagation Algorithm
CNN: Convolutional Neural Network
CPNN: Competitive Neural Network
CRF: Conditional Random Field
DDI: Drug-Drug Interaction
EHR: Electronic Health Records
FF-BP NN: Feedforward-Back-Propagation Neural Network
GRU: Gated Recurrent Units
IoMT: Internet of Medical Things
KSOM: Kohonen Self-Organizing Map
LR: Logistic Regression
LSTM: Long Short-Term Memory
LVQ: Learning Vector Quantization
MLP: Multilayer Perceptron
MSE: Mean Squared Error
RBF: Radial Basis Function
RF: Random Forest
RMSE: Root Mean Squared Error
RNN: Recurrent Neural Network
ROC: Receiver Operating Characteristic (curve)
SMAPE: Symmetric Mean Absolute Percentage Error
SOM: Self-Organizing Map
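The input-hidden-output structure described above can be made concrete with a minimal forward pass through a one-hidden-layer network. This is an illustrative sketch only (not code from the chapter), using NumPy; the function names, layer sizes, and random weights are all assumptions chosen for the example.

```python
import numpy as np

def relu(x):
    # ReLU activation, applied element-wise in the hidden layer
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2):
    # One forward pass: input layer -> single hidden layer -> output layer.
    h = relu(W1 @ x + b1)  # hidden-layer activations
    return W2 @ h + b2     # output-layer values (no activation here)

# Illustrative shapes: 3 input features, 4 hidden units, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

y = forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2)
```

Deeper models simply stack more hidden layers between the input and output, and the specialized architectures in the following sections (CNN, RNN, LSTM) replace the fully connected hidden layers with layers suited to the input data type.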
