Challenges of Applying Deep Learning in Real-World Applications

Amit Kumar Tyagi (School of Computing Science and Engineering, Vellore Institute of Technology, Chennai, India) and G. Rekha (Department of Computer Science and Engineering, Koneru Lakshmaiah Educational Foundation, Hyderabad, India)
DOI: 10.4018/978-1-7998-0182-5.ch004


Due to developments in technology, millions of devices (the Internet of Things, IoT) are generating large amounts of data (called big data). This data needs to be processed by analytics tools and techniques. Over the past several decades, a lot of research has used data mining, machine learning, and deep learning techniques. Here, machine learning is a subset of artificial intelligence, and deep learning is a subset of machine learning. Deep learning is more efficient than classical machine learning (in terms of providing accurate results) because it uses perceptrons, neurons, and the back-propagation method; that is, these techniques solve a problem by learning on their own, without being explicitly programmed by a human being. Deep learning is used in several real-world applications such as healthcare and retail. But using deep learning techniques in such applications creates several problems and raises critical issues and challenges, which need to be overcome to obtain accurate results.
Chapter Preview

Introduction About Deep Learning

The definition of Artificial Intelligence (AI) is quite simple and easy to understand: it refers to the simulation of intelligent behavior by computer systems. Deep learning is a subset of machine learning that produces predictions from large collections of data (using analysis techniques and learning on its own), whereas machine learning is a subset of artificial intelligence that produces predictions with human help in the analysis. Machine learning is defined as “a field of study that gives computers the capability to learn without being explicitly programmed” (Tyagi, A. K. (2019); Samuel, A. L. (2000); Awad, M., & Khanna, R. (2015)). Machine learning uses supervised algorithms (regression, decision trees, random forests, classification) and unsupervised algorithms (clustering, association analysis, Hidden Markov Models, etc.). Here, supervised, unsupervised, and reinforcement learning are the three types of machine learning. Intelligent behavior encompasses any range of tasks for which human intelligence is normally necessary, such as visual pattern recognition, language translation, and decision-making. On the other hand, deep learning uses Artificial Neural Networks (ANNs), inspired by the biology of the human brain, from which emerge algorithms that are highly efficient in solving classification problems. In general terms, deep learning is an aspect of AI that uses complex hierarchical neural networks and lots of precise data to make machines capable of learning things automatically on their own (as a human being learns). Today, the market value of deep learning software is growing at a high rate, from $20 million (in 2018) to $930 million (by 2025). Deep learning is the spearhead of artificial intelligence (see Figure 1), and it is one of the most exciting technologies of the past decade.
Nowadays, this kind of learning technique is being used in several applications and areas, such as recognizing speech or detecting cancer.
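As a concrete illustration of the supervised “regression” family mentioned above, a least-squares line can be fitted to labelled examples in a few lines of plain Python. This is a sketch with made-up data chosen here for illustration, not code from the chapter:

```python
# Toy supervised learning: ordinary least-squares fit of y = a*x + b
# from labelled (input, output) examples -- the simplest member of the
# "regression" family of supervised algorithms.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Labelled training data generated by the rule y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)  # recovers a = 2.0, b = 1.0
```

The “supervision” here is simply the labels `ys`: the algorithm is shown the correct answers and recovers the rule that produced them, which is what distinguishes this family from unsupervised methods such as clustering.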

Deep learning is often compared to the mechanisms that underlie the human mind, and some experts believe that it will continue to advance at an unexpected pace and conquer many more fields. In some cases, there is fear that deep learning might threaten the very social and economic fabric that holds our societies together, by driving humans into unemployment or servitude. There is no doubt that machine learning and deep learning, which are interrelated (refer to Figure 1), are super-efficient for many tasks. However, no technique (not even deep learning) is universal, able to solve all problems and override all previous technologies. These learning techniques have distinct limits and challenges which, in many cases, prevent them from competing with the mind of a human being. Human beings can learn abstract, broad relationships between different concepts (Seymour, V. (2016)) and make decisions with little information. In brief, deep learning algorithms are narrow in their capabilities and need precise information to do their job. Note that, as discussed above, deep learning is a subset of machine learning (Tyagi, A. K. (2019)) that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones. A deep learning technique learns categories incrementally through its hidden-layer architecture, building from low-level to high-level categories, like letters to words and then to sentences; in image recognition, for example, this means identifying light/dark areas before categorizing lines and then shapes, which finally allows face recognition.
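The basic unit behind such networks is the perceptron (discussed further below and in the year-wise history): a single artificial neuron that learns by nudging its weights whenever its output is wrong. A minimal pure-Python sketch, where the AND-gate task and all names are illustrative choices and not taken from the chapter:

```python
# A single perceptron trained on the logical AND function, using the
# classic error-driven weight-update rule. Integer weights keep the
# arithmetic exact for this toy example.

def predict(w, b, x):
    # Step activation: the neuron "fires" (outputs 1) only when the
    # weighted sum of its inputs plus the bias exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(samples, epochs=20):
    w = [0, 0]  # one weight per input
    b = 0       # bias term
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            # Adjust each weight toward reducing the error, in
            # proportion to the input that contributed to it.
            w[0] += err * x[0]
            w[1] += err * x[1]
            b += err
    return w, b

# Labelled data for AND: output is 1 only when both inputs are 1.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

A deep network stacks many such units into hidden layers and replaces this simple update rule with back-propagation, but the core idea, iteratively adjusting weights to reduce output error, is the same.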

Figure 1.

Relation of deep learning to machine learning and artificial intelligence


In deep learning, each neuron or node in the network represents one aspect of the whole, and together they provide a full representation of the image. Each node or hidden layer is given a weight that represents the strength of its relationship with the output, and as the model develops, the weights are adjusted. A popular example of deep learning (in the sense of a machine learning by itself) is Ex Machina, a Hollywood movie released in 2014. In this movie, a machine (a robot, i.e., a machine with intelligence) thinks for itself, becomes attached to a human, and eventually attacks human beings. In the near future, it will be interesting to see machines and people working together and understanding each other (without any malicious activity). Apart from that, exciting applications of deep learning include: face recognition, image classification, speech recognition, text-to-speech generation, handwriting transcription, machine translation, medical diagnosis (Cao, C., Liu, F., (2018); Litjens, G., Kooi, T., (2017)), cars (drivable-area detection, lane keeping), digital assistants, ads, search, social recommendations, and game playing with deep reinforcement learning (RL). Hence, the history of deep learning is now surveyed year-wise:

  • 1943: Neural Networks

  • 1957: Perceptron

  • 1974-86: Back-propagation, RBM, RNN

  • 1989-98: CNN, MNIST, LSTM, Bi-directional RNN

  • 2006: “Deep Learning”, DBN

  • 2009: ImageNet

  • 2012: AlexNet, Dropout

  • 2014: GANs

  • 2014: DeepFace

  • 2016: AlphaGo

  • 2017: AlphaZero, Capsule Networks

  • 2018: BERT

Further, the history of deep learning tools (year-wise) is as follows:

  • Mark 1 perceptron – 1960

  • Torch – 2002

  • CUDA – 2007

  • Theano – 2008

  • DistBelief – 2011

  • Caffe – 2014

  • TensorFlow 0.1 – 2015

  • PyTorch 0.1 – 2017

  • TensorFlow 1.0 – 2017

  • PyTorch 1.0 – 2018

  • TensorFlow 2.0 – 2019

Hence, each section now discusses essential topics related to deep learning in detail.
