Current Trends in Deep Learning Frameworks With Opportunities and Future Prospectus

Chitra A. Dhawale, Kritika Dhawale
Copyright: © 2020 | Pages: 15
DOI: 10.4018/978-1-7998-1159-6.ch003

Abstract

Artificial Intelligence (AI) is going through its golden era, playing an important role in various real-time applications. Most AI applications use Machine Learning (ML), which currently represents the most promising path to strong AI. Deep Learning (DL), itself a kind of ML, is becoming more and more popular and successful across different use cases and is at the peak of its development; hence, DL is becoming a leader in this domain. To foster the growth of the DL community, many open-source frameworks that implement DL algorithms are available, each supporting particular algorithms and applications. This chapter provides a brief qualitative review of the most popular and comprehensive DL frameworks and informs end users of current trends in DL frameworks, helping them make an informed choice of the framework best suited to their needs, resources, applications, and career plans.

Introduction

Artificial intelligence (AI) is everywhere; the chances are that we are using it in one way or another, sometimes without even knowing it. In recent years, researchers have been drawn to the AI domain because its major applications, including multimedia (text, image, speech, video) recognition, social network analysis, data mining, natural language processing, and driverless cars, rely on machine learning, which has made this subdomain extremely popular among researchers and industrialists. Most AI applications do indeed use Machine Learning (ML), and it currently represents the most promising path to strong AI. Deep Learning (DL), itself a kind of ML, is becoming more and more popular and successful across different use cases and is at the peak of its development; hence, DL is becoming a leader in this domain.

One can implement simple deep learning algorithms from scratch using Python or any other programming language, but it becomes a difficult and time-consuming task for an individual programmer to implement the complex models frequently needed in these applications, such as Convolutional Neural Networks (CNN) (Hinton, 2009; LeCun, Bengio, & Hinton, 2015; Al-Ayyoub et al., 2018), Recurrent Neural Networks (RNN) (Cho et al., 2014; Hinton, 2015; Al-Ayyoub et al., 2018), or Deep Generative Networks (DGN) (Hinton, 2009; LeCun, Bengio, & Hinton, 2015; Al-Ayyoub et al., 2018). Deep learning frameworks offer building blocks for designing, training, and validating deep neural networks through a high-level programming interface, as illustrated in the sketch below. Frameworks make the development and deployment of applications easy and fast. Almost all frameworks are open source and can easily be downloaded and used for programming.
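As a minimal sketch of this idea (not taken from the chapter), the snippet below assumes PyTorch, the Python successor of the Torch framework discussed later, is installed; the SmallCNN name, layer sizes, and input shape are illustrative assumptions. It shows how a CNN is assembled from ready-made layers through a high-level interface rather than implemented from scratch.

```python
# A minimal sketch: building a small CNN from framework-provided layers.
# Assumes PyTorch is installed; all layer sizes are illustrative only.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolution, activation, and pooling come ready-made as layers.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected classifier head (assumes 28x28 grayscale input).
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy = torch.randn(4, 1, 28, 28)   # a batch of 4 synthetic images
print(model(dummy).shape)           # torch.Size([4, 10])
```

The same model would take far more code, and manual gradient derivations, if written from scratch, which is precisely the burden the frameworks reviewed in this chapter remove.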

A few papers have been published on the comparative analysis of hardware performance, applications, and other features of various deep learning frameworks, but none has covered the current state of the art of these frameworks in detail. This type of study and analysis is very useful for people who are interested in applying deep learning in their applications and/or research work, as it enables them to choose the framework best suited to their work.

Bahrampour et al. (2015) published a comparative study of DL frameworks. The authors compared five frameworks (Caffe, Neon, TensorFlow, Theano, and Torch) in terms of speed (gradient computation time and forward time), hardware utilization, and extensibility (the ability to support different types of DL architectures) after applying various convolutional algorithms to them (Maas et al., 2011).

Shi et al. (2016) presented a comparative study of several DL frameworks, including Caffe, MXNet, CNTK, TensorFlow, and Torch, based on running time and convergence rate.

Goldsborough (2016) presented a timeline of machine learning software libraries for DL. He focused on TensorFlow's results and its basic properties, including its computational paradigm, distributed model, and programming interface.

Chintala (2017) applied different ImageNet benchmarks to a variety of convolutional network types, including AlexNet, GoogleNet, Overfeat, and OxfordNet, using different open-source DL frameworks such as Caffe, Theano, Torch, TensorFlow, and Chainer.

Kovalev et al. (2016) presented a comparative study of different DL frameworks, namely Theano, Torch, Caffe, TensorFlow, and DeepLearning4J, in terms of training speed, prediction speed, and classification accuracy. They used the MNIST dataset of handwritten digits to test fully connected networks (FCNs) on the five frameworks (Yu et al., 2014).

This chapter presents a unique review of various DL frameworks based on the current state of the art and their applications.
